The first point that jumped out at me from this week’s reading was the proposed roles of Solution Architect (SA), Application Architect (AA), and Enterprise Solution Architect (ESA) within an organization’s Enterprise Architecture (EA) function. The explanation of the responsibilities, focus areas, and deliverables makes a lot of sense. I can see the benefits of dividing the work: the SA focuses on a single application’s architecture, the AA takes a generalized view of multiple application architectures within a business area, and the ESA focuses on enterprise-wide application architecture trends and planning.
My issue is not with the definitions themselves; rather, I find it hard to believe that any organization would establish all of these roles at any given point in its EA program’s development. The article did concede that smaller organizations would likely have individual staff performing multiple roles at once. I think additional research into how real organizations divide these roles among their staff would be valuable, and I would be interested to see how the roles are divided at organizations of similar sizes.
I would anticipate the results showing that larger organizations implement the SA and ESA roles: their scale supports more specialization, and they need to communicate ESA-level concepts across a larger SA staff. Smaller organizations likely could not support that much specialization, and communication would be spread across a smaller number of EA staff, so I would expect them to implement only the AA role.
A second piece of the reading that stood out to me was the Application Architecture Future Trend paper describing the development of Amazon’s architecture. Beyond the overall picture of Amazon’s now trend-setting approach to developing AWS, one arguably small detail caught my attention: Amazon’s practice of exposing all functions through APIs, even for internal consumers, appears to avoid the weaknesses I have seen with this approach throughout my career.
For many years I have seen this approach described as a best practice, but in practice I have seen it used very little. Further, in my own experience, the approach never quite lived up to its theoretical benefits. In particular, the idea that an API interface insulates the participating endpoints from being impacted by each other’s changes never quite worked out. Two main shortcomings I have seen are:
- Parameter changes to API calls, over time, drive the need to change the calling application, in contradiction of the promise to prevent exactly this
- Additional functionality and data returned from calls cause problems for calling applications that do not need them
The personal revelation I found in the paper involved the use of XML documents in the API calls. Formatting parameters as an XML document enables more flexible ways of calling an API, allowing parameter changes without changing the API signature, which addresses my first issue. And the concept of letting the caller provide an XSLT parameter to limit the returned data to what it actually needs, addressing both data overload and efficiency, was an eye opener. I will be adding both techniques to my own toolkit.
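To make the idea concrete, here is a minimal sketch of both techniques in Python using lxml. The service name, the order document and its fields, and the stylesheet are all hypothetical examples of my own, not taken from the paper: the request is a single XML document (so adding optional parameters later never changes the function signature), and the caller supplies an XSLT stylesheet that trims the response down to just the fields it needs.

```python
from lxml import etree

def get_order(request_doc):
    # The whole request is one XML document; adding an optional element
    # later (say <includeHistory>) would not change this signature.
    order_id = request_doc.findtext("orderId")
    xml = f"""<order id="{order_id}">
                <customer>Acme Corp</customer>
                <total currency="USD">149.95</total>
                <internalRouting>WH-7</internalRouting>
              </order>"""
    return etree.fromstring(xml)

# Caller-supplied stylesheet that keeps only the fields this caller
# uses; fields added to the response later simply fall through unmatched.
caller_xslt = etree.XSLT(etree.fromstring(b"""
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/order">
    <order id="{@id}">
      <total><xsl:value-of select="total"/></total>
    </order>
  </xsl:template>
</xsl:stylesheet>"""))

request = etree.fromstring(b"<getOrder><orderId>1001</orderId></getOrder>")
trimmed = caller_xslt(get_order(request))
# Prints only the id and total; <internalRouting> never reaches the caller.
print(etree.tostring(trimmed, pretty_print=True).decode())
```

The nice property is that both evolution paths are decoupled: the provider can add request parameters and response fields, and callers that neither send nor select them are untouched.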
One big question raised for me came from the article “Time to Retire the 3 Tier Architecture”. The title caught my eye partly because I had already come to that conclusion myself (I think the time is long past, actually). The other part was my own struggle to understand the new environment, specifically when it comes to the data tier. Replacing the UI tier, given the changes in UI devices, seems easy to understand. For the mid-tier, replacement also makes sense with the proliferation of SOA services, microservices, and the like. However, I have not seen a good reference architecture for the data tier. I have some exposure to Big Data and the concept of Eventual Consistency, but not a good, detailed architecture for implementing either as a replacement for a good old-fashioned self-contained relational DB. One main capability I have not seen replicated in other approaches is speed: generalized services always seem to be much slower than relational DBs once any large amount of data is involved, and the complexity of managing eventual consistency for all but the simplest data sets seems daunting.
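To show what even the simplest version of that management looks like, here is a toy sketch of my own (not from any of the readings): two replicas accept concurrent writes and reconcile with a last-write-wins rule. Even this trivial policy silently drops one writer’s update; real systems reach for vector clocks, CRDTs, or application-level merge logic, which is exactly the complexity that worries me.

```python
import time

class Replica:
    """Toy key-value replica; each key stores (timestamp, value)."""
    def __init__(self):
        self.data = {}

    def write(self, key, value, ts=None):
        self.data[key] = (ts if ts is not None else time.time(), value)

    def merge(self, other):
        # Anti-entropy pass: keep the newer version of each key
        # (last-write-wins). Smarter merge rules are where the real
        # complexity of eventual consistency lives.
        for key, (ts, value) in other.data.items():
            if key not in self.data or ts > self.data[key][0]:
                self.data[key] = (ts, value)

a, b = Replica(), Replica()
a.write("balance", 100, ts=1)  # one client updates replica A...
b.write("balance", 250, ts=2)  # ...while another updates replica B
a.merge(b)
b.merge(a)
# Both replicas converge on 250, but A's write of 100 is silently lost.
print(a.data["balance"][1], b.data["balance"][1])  # 250 250
```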
My question is: has anybody found a good reference for what either of these data approaches would look like as a replacement for the traditional data tier?
Great question about the data tier, and one that I have also wondered about. Unfortunately I don't have an answer, as I struggle to see how to avoid self-contained databases due to performance. Lowering the bar a bit, though, I would be thrilled if enterprises could get a handle on their data capture so that there is a single point of entry and a single version of the truth. A robust services layer would then enable disparate systems to pull from the respective master data repository rather than implementing their own data capture mechanisms. Given such a scenario, the data is still likely to be replicated across many localized databases, but at least it would be consistent.