
We tend to think about “microservices” as small, very logic-focused services
that deal with, usually, one responsibility. However, if we look at Martin
Fowler’s definition of Microservices — and you know, he’s a very smart guy, so
we usually like to know what he thinks — you’ll notice we’re missing a very
small, yet key, trait about microservices: decoupled. Let’s take a closer look
at what we’re calling a “microservice”. This term gets thrown around so much these
days that it’s getting to the point where it’s exactly like teenage sex: everyone
talks about it, nobody really knows how to do it, everyone thinks everyone else
is doing it, so everyone claims they are doing it. Truth be told, in 99% of
the interviews I conduct as a manager, when I ask about microservices I get
responses about REST APIs. And no, they’re not necessarily the same thing. And
by definition, REST APIs alone can’t be microservices, even if you split them up
into multiple smaller ones, each taking care of a single responsibility. They
can’t, because by definition for you to be able to use a REST API directly, you
need to know about it.
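To make that missing trait concrete, here is a minimal sketch of the decoupling in question, with an in-memory event bus standing in for a real message broker such as Kafka or RabbitMQ. All names here are hypothetical, an illustration of the principle rather than anyone's production design; the point is that the publisher never references its consumers:

```python
from collections import defaultdict
from typing import Callable

class Broker:
    """Tiny in-memory event bus standing in for a real message broker."""
    def __init__(self) -> None:
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Deliver the event to whoever subscribed; the publisher doesn't know who.
        for handler in self._subscribers[topic]:
            handler(event)

broker = Broker()

# The order service publishes a fact; it holds no reference to any consumer.
def place_order(order_id: str, amount: float) -> None:
    broker.publish("order.placed", {"order_id": order_id, "amount": amount})

# The invoicing service reacts to the event; it never calls the order service.
def on_order_placed(event: dict) -> None:
    print(f"Invoicing order {event['order_id']} for {event['amount']}")

broker.subscribe("order.placed", on_order_placed)
place_order("A-42", 99.0)  # -> Invoicing order A-42 for 99.0
```

Contrast this with a direct REST call, where the caller must know the API's address, contract, and availability. That knowledge is exactly the coupling the definition rules out.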
SREs may not think of cloud strategy as a core part of their job responsibility.
That’s a task that more commonly falls to cloud architects. But simply
encouraging their organizations to leverage more reliable cloud architectures
can be one way to improve reliability, according to the State of DevOps report.
While enhanced reliability is not the only reason why more and more
organizations are now expanding into multi-cloud and hybrid cloud architectures,
increased availability was the second most common reason for adopting one of
these strategies among the professionals whom Google surveyed. The report also
noted that organizations with multi-cloud or hybrid cloud architectures were 1.6
times more likely to meet or exceed their performance goals. The takeaway here
for SREs is that, although having more clouds to manage creates new reliability
challenges in some respects, the survey data links multi-cloud and hybrid
cloud to better reliability outcomes in the long run. It’s time to let go
of your single cloud.

By asking questions like “Are these applications still relevant?”, “Is this
system working?”, or “How can I make this system better?”, assess how you can
make a difference, add value, and propel your organization to become an
industry leader. The complex environment, fueled by continued advances in
technology, hinders the ability of the organization to realize value. The
enterprise architecture solution will likely not deliver immediate returns (Gong
& Janssen, 2021). Kotusev (2018) noted that a rigid approach to enterprise
architecture implementation is the worst strategy. Persistent evaluation and
adaptation of the EA solution are necessary to reveal when further change is needed. It
is appropriate to have parts of the EA strategy remain purposively generalized
(Alwadain, 2020; Marcinkowski & Gawin, 2019). For example, a flexible EA
solution can quickly transition to SaaS (software as a service) operations that
deliver more value than on-premises operations. Cooiman (2021) noted the value
of considering operations that directly support and influence portfolios,
programs, projects, and business functions, such as supply chain management and payroll.
As the report notes, previous surveys have not captured the full scope of work
happening in GovTech in a reliable way. The Open Group has, as its mission, a
long-standing focus on the open flow of information – Boundaryless Information
Flow™. Transparent information-sharing makes connected systems worth more than
the sum of their parts and makes innovation easier to spread. Likewise, the
GTMI’s clear view of where progress is being made in government digitalization
is something which will, I think, help to accelerate the modernization of public
sector services globally. Indeed, many of the report’s key insights are
concerned with ensuring that GovTech infrastructure is interconnected and
interoperable. Often, it finds, countries have discrete digitalized workflows
such as a back-office solution or an online service portal, but have yet to knit
these workflows together. Likewise, while digital workflows open the door to
two-way information flow with citizens, making services more efficient and
responsive, this has seen only limited global rollout.

Get an MMF Baseline: Even if no formal MMF exists in an organization, an
implicit one does. Technical documents mapping data architecture, the
knowledgeable business analyst to whom others turn to understand reporting data, and
data-entry procedures provide context around an organization’s data and pieces
of its MMF. Getting a baseline about what people, processes, and technology
already exist and how they inform the organization’s Metadata Management
framework just makes sense. Using a “qualified and knowledgeable data
professional (and other skilled talents) to administer and interpret data
readiness assessments” along with Data Maturity models like those put forth by
Gartner, or the Capability Maturity Model Integration (CMMI), gives a good
MMF starting place.

Be Clear About What an MMF Will Achieve: Be clear why an
organization needs to manage metadata and implement a Metadata Management
framework. Metadata Management helps reduce training costs, provides better data
usage across data systems, and simplifies communication.
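Returning to the baseline step: as a rough illustration (the names and categories below are hypothetical), the implicit MMF can be captured as a simple inventory of the people, processes, and technology that already hold metadata knowledge:

```python
from dataclasses import dataclass, field

@dataclass
class BaselineItem:
    name: str    # e.g. "data architecture diagram" or "reporting analyst"
    kind: str    # "people", "process", or "technology"
    covers: list = field(default_factory=list)  # data domains it documents

inventory = [
    BaselineItem("Data architecture documents", "technology", ["warehouse schemas"]),
    BaselineItem("Reporting business analyst", "people", ["sales KPIs"]),
    BaselineItem("Data-entry procedures", "process", ["customer records"]),
]

# Group the implicit MMF by kind to see where the baseline is thin.
for kind in ("people", "process", "technology"):
    items = [item.name for item in inventory if item.kind == kind]
    print(f"{kind}: {items or 'nothing recorded yet'}")
```

Even a listing this simple makes gaps visible before any tooling decision is made.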
EBSI is designed with a number of core principles in mind: working towards the
public good; transparent governance; data compatibility; open-source software;
and, compliance with relevant EU regulations such as the GDPR and eIDAS. EBSI
would provide a common, shared and open public infrastructure aimed at providing
and supporting a secure and interoperable ecosystem that will enable the
development, launch and operation of EU-wide cross-border digital services in
the public sector. The infrastructure will reflect European values, with data
sovereignty and green credentials in mind, and tackle global issues such as
climate change and supply chain corruption. EBSI would thereby deliver public
services with high requirements of scalability and throughput, interoperability,
robustness, and continuity of the service and with the highest standards of
security and privacy that will allow public administrations and their ecosystems
to verify information and make services trustworthy. This infrastructure should
be deployed within a period of 3 years.

The right tooling will help you put your governance framework into practice,
providing the necessary guardrails and data visibility that your teams need to
boost trust and confidence in their data analysis. Perhaps the most fundamental
tool for data governance—certainly the greatest help for us here at Tableau—is
our integrated data catalog. This enables employees to see data details like
definitions and formulas, lineage and ownership information, as well as
important data quality notifications, from certification status to events, such
as when a data source refresh has failed and the information isn’t up to date. A data
catalog boosts the visibility of valuable metadata right in people’s
workstreams, whether that metadata lives in Tableau or is brought in from an
external metadata management system via an API. This also helps IT with impact
analysis and change management, to understand who and which assets are affected
downstream when changes are made to a table.
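As a rough sketch of that impact analysis (the table names and graph shape are invented for illustration and are not tied to Tableau's actual APIs), lineage can be treated as a directed graph and downstream impact as a simple traversal:

```python
from collections import deque

# asset -> assets that consume it directly (illustrative lineage data)
lineage = {
    "raw.orders":     ["staging.orders"],
    "staging.orders": ["mart.sales", "mart.refunds"],
    "mart.sales":     ["dashboard.revenue"],
}

def downstream(asset: str) -> list:
    """Breadth-first walk over the lineage graph from the changed asset."""
    seen, queue, result = {asset}, deque([asset]), []
    while queue:
        for child in lineage.get(queue.popleft(), []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
                result.append(child)
    return result

print(downstream("raw.orders"))
# ['staging.orders', 'mart.sales', 'mart.refunds', 'dashboard.revenue']
```

A real catalog performs the same walk over metadata harvested from databases and BI tools, just at much larger scale.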

A centralized DLT is not immutable. The ledger can be rewritten arbitrarily by
the party (or parties) who control it, or as the result of a cyberattack.
Because of its open and competitive nature (mining, staking, etc.), a public
blockchain can achieve immutability, and hence its records are credible. Thousands of independent
nodes can ensure an unprecedented level of resistance to any sort of attack.
The question of error correction usually comes next, after the discussion about immutability. How to correct a
mistake? What if you need to change your smart contract? What if you lost your
private key? There is nothing you can do retroactively — alteration in the
blockchain is impossible. What’s done is done. In this regard, a centralized DLT is
usually presented as the opposite: an alternative to blockchain. You will hear that DLTs
can be designed so that those who control the network verify transactions on
entry and therefore, non-compliant transactions are not allowed to pass through.
But it would be a fallacy to think that censorship in the network will
ultimately exclude all mistakes and unwanted transactions. There will always be
a chance for a mistake.
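A toy hash chain makes the immutability argument concrete: each record commits to the hash of the previous one, so rewriting any entry breaks every hash after it. This is only a sketch of the data structure; real chains add consensus among independent nodes on top:

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

# Build a small chain of two transactions.
chain = []
prev = "0" * 64
for record in [{"tx": "alice->bob 5"}, {"tx": "bob->carol 2"}]:
    prev = block_hash(record, prev)
    chain.append({"record": record, "hash": prev})

def verify(chain: list) -> bool:
    """Recompute every hash; any rewritten record breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block["hash"] != block_hash(block["record"], prev):
            return False
        prev = block["hash"]
    return True

print(verify(chain))                          # True
chain[0]["record"]["tx"] = "alice->bob 500"   # tamper with history
print(verify(chain))                          # False: the rewrite is detectable
```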

The extensive documentation, verified by third party brokers, that has
underpinned trading and commercial agreements in the past is at odds with
digital ways of working. The same steps of these processes need to be
maintained, but conducted through digital interfaces that are more open and more
complex. Distributed Ledger Technologies (DLT) can fill this gap. Distributed
ledger describes the approach of creating equal decentralized copies of
transactions, instead of storing them in one central place (i.e., a database for
digital, or a document for analogue). What makes DLT so exciting and relevant is
that it was conceived and developed for this decentralized digital world where
trust is at a premium. Instead of being built on existing relationships, trust
can be anchored in cryptographic processes (the so-called consensus algorithms),
which control the transactions. It's not simply a case of storing the
information safely that creates trust, it's also how it's collected. DLT can
determine the conditions under which nodes of the decentralized infrastructure
capture and record new transactions and when they do not.
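As a sketch of that verification-on-entry idea (the admission rule here is invented for illustration and is not any particular DLT's logic), each node can apply an explicit validity check before a transaction is ever recorded:

```python
def is_valid(tx: dict, balances: dict) -> bool:
    """Admission rule checked by every node before recording a transaction."""
    return (
        tx.get("amount", 0) > 0
        and tx.get("sender") in balances
        and balances[tx["sender"]] >= tx["amount"]
    )

balances = {"alice": 10, "bob": 0}
ledger = []

for tx in [{"sender": "alice", "receiver": "bob", "amount": 5},
           {"sender": "mallory", "receiver": "bob", "amount": 99}]:
    if is_valid(tx, balances):
        balances[tx["sender"]] -= tx["amount"]
        balances[tx["receiver"]] = balances.get(tx["receiver"], 0) + tx["amount"]
        ledger.append(tx)
    else:
        print(f"rejected on entry: {tx}")

print(ledger)  # only the compliant transaction was recorded
```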
In general, agile teams work with robust methods and practices across different
groups and their ecosystem. A tools-driven approach and automated engineering
enable building a continuous and connected ecosystem where captured feedback and
user behavior are analyzed and actioned. Automated engineering helps deliver a
better customer experience to users. Digital-first does not work in silos; it
builds products and platforms to connect and create an ecosystem. Traditionally,
we dealt with effort, counts, rollbacks, monthly releases, and the like; under
the guise of agile, KPIs were chosen to suit management communication patterns
and reporting. Modern-day engineering focuses on the outcome. Failure is noticed
and fixed rapidly, but how quickly, and how much that improves over time, are
the real questions. In this ecosystem, the end customer sees the change
immediately. Success of the ecosystem is measured with several performance
indicators, such as MTTX, lead time/cycle time, and deployment rates, on the
development side.
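As a rough illustration of those outcome-focused indicators (the timestamps below are made up), deployment frequency, lead time, and mean time to restore can all be computed from plain deployment and incident records:

```python
from datetime import datetime, timedelta

# (committed, deployed) pairs over a two-day window -- illustrative data only.
deployments = [
    (datetime(2022, 3, 1, 9, 0), datetime(2022, 3, 1, 15, 0)),
    (datetime(2022, 3, 2, 10, 0), datetime(2022, 3, 2, 12, 0)),
]
# (detected, restored) pairs for incidents in the same window.
incidents = [
    (datetime(2022, 3, 2, 13, 0), datetime(2022, 3, 2, 13, 30)),
]

def average(durations: list) -> timedelta:
    return sum(durations, timedelta()) / len(durations)

lead_times = [deployed - committed for committed, deployed in deployments]
restore_times = [restored - detected for detected, restored in incidents]

print("deployments per day:", len(deployments) / 2)   # deployment rate
print("average lead time:", average(lead_times))      # commit -> production
print("MTTR:", average(restore_times))                # detect -> restore
```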
Quote for the day:
“We get our power from the people we lead, not from our stars and our bars.” -- J. Stanford