A big part of my current job is getting different systems to work together, sometimes in ways not entirely intended by their original authors. For example, getting an SSO server to share account data with a CRM platform, or getting any "enterprise" system to have a reasonable user interface (enterprise software is always ugly by default). One important consideration is how much I trust the system I am integrating: it's more work to be paranoid, but sometimes the software is out to get you.
I tend to trust popular open source libraries, such as those in the Apache family like Lucene, Hadoop, or Cassandra. Level of activity is an important indicator of a high-quality open source project. I also tend to trust self-contained libraries more than external services, since many network and availability failure modes simply don't apply when code runs in the same runtime as my own application logic.
Conversely, I distrust closed vendor systems and open systems where the bulk of the code comes from a vendor that supports the system. There are some exceptions to this, for example vendor-managed relational databases like Oracle DB or heavily-used libraries like the Amazon AWS SDK. If the vendor's interface is changing rapidly, I will be more cautious in my approach.
Based on my level of trust, I then apply some design rules to protect the system I am building:
- I always decouple the supplied interface from my domain logic. I typically build my own class to encapsulate the objects and operations in the supplied interface, and no other code in my system is allowed to call the vendor's interfaces directly. Then if the vendor's interface changes, or if I need to use it differently in the future, there is one place to make the change without impacting the rest of my application logic. This technique also lets me maintain a consistent domain model even when supplied libraries use very different paradigms.
- When working with any third-party system, I try to minimize changes and patches to the supplied code. This often means building a more conventional application that wraps or interfaces with the external system. Sometimes patching the vendor's code is unavoidable, but I prefer to push vendors to fix their own bugs and to run their systems in an unaltered condition.
- If I really distrust a service, I plan for it to fail. A service can fail in different ways: not responding at all, taking a long time to produce results, or returning invalid results. Undesirable behavior is often intermittent and hard to reproduce. Judicious use of timeouts is a good first step when responsiveness is a concern. Automated tests and in-application monitoring can help catch invalid results. At the extreme, my system can take corrective action to heal the failed component, for example by restarting it. There is no practical way to guarantee good behavior from a vendor product, but I try to manage the fallout when things go wrong.
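The first rule above, decoupling the supplied interface from the domain logic, can be sketched roughly as follows. All of the names here (`VendorCrmClient`, `Contact`, `CrmGateway`) are hypothetical stand-ins, not a real vendor SDK:

```python
from dataclasses import dataclass


class VendorCrmClient:
    """Stand-in for a third-party SDK with its own paradigm."""

    def fetch_record(self, record_id):
        # Vendors often return loosely structured data with their
        # own field names; this simulates that.
        return {"ID": record_id, "FULL_NAME": "Ada Lovelace", "EML": "ada@example.com"}


@dataclass(frozen=True)
class Contact:
    """Our own domain object; the rest of the system depends only on this."""
    contact_id: str
    name: str
    email: str


class CrmGateway:
    """The single place in the system that touches the vendor interface."""

    def __init__(self, client: VendorCrmClient):
        self._client = client

    def get_contact(self, contact_id: str) -> Contact:
        raw = self._client.fetch_record(contact_id)
        # Translation from the vendor's vocabulary to our domain model
        # happens here and nowhere else.
        return Contact(contact_id=raw["ID"], name=raw["FULL_NAME"], email=raw["EML"])


gateway = CrmGateway(VendorCrmClient())
contact = gateway.get_contact("42")
print(contact.name)  # prints "Ada Lovelace"
```

If the vendor renames `FULL_NAME` or changes `fetch_record` entirely, only `CrmGateway` needs to change; every other module keeps working against `Contact`.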
Defensive design for third-party products allows software developers to create a consistent domain model, to prevent future changes from cascading through the system, and to let systems fail in a controlled way, perhaps even healing themselves.