How to master microservices data architecture design
Optimizing microservices applications for data management takes the right
combination of application design and database technology. It isn't a matter
of simply choosing one database model over another, or placing a database in
some external container. Instead, it comes down to staying focused on a set of
database attributes known as ACID: atomicity, consistency, isolation and
durability. Atomicity dictates that a database operation should never be left
partially complete: it either happens in full, or it doesn't happen at all. Such
an operation shouldn't be split into smaller sets of independent tasks.
Consistency means that the database never violates the rules that govern its
data, even when handling failures. For example, if a multistep change fails
halfway through execution, the database must roll back the operation completely
to avoid retaining inaccurate data. Isolation is the principle that each
database transaction should operate without relying on or affecting the others.
This allows the database to accommodate multiple operations at once while still
keeping any failures contained. Durability is another word for a database's
resilience. Architects should always plan for failure and disruption by
implementing the appropriate rollback mechanisms, remaining mindful of
couplings, and regularly testing the database's response to likely failures.
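To make the atomicity and rollback behaviour concrete, here is a minimal sketch in Python, using SQLite purely as a stand-in transactional store; the accounts table and the "insufficient funds" rule are illustrative assumptions, not anything from the article:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("orders-svc", 100), ("billing-svc", 50)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Apply a multistep change atomically: both updates commit, or neither does."""
    with conn:  # opens a transaction; commits on success, rolls back on any exception
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                     (amount, src))
        balance = conn.execute("SELECT balance FROM accounts WHERE id = ?",
                               (src,)).fetchone()[0]
        if balance < 0:
            raise ValueError("insufficient funds")  # triggers a full rollback
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                     (amount, dst))

try:
    transfer(conn, "orders-svc", "billing-svc", 500)  # fails partway through
except ValueError:
    pass  # the partial debit was rolled back; no inaccurate data is retained

print(dict(conn.execute("SELECT id, balance FROM accounts")))  # balances unchanged
```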
Building a Self-Service Cloud Services Brokerage at Scale
The concepts and architecture behind a cloud brokerage are continually
evolving. In a recent cloud brokerage survey on pulse.qa, an online community
of IT executives, 29% of the respondents answered that they outsourced the
development of their cloud brokerage to a regional systems integrator (SI) or
professional services firm. More interesting is that 56% of the respondents
built and launched their brokerage using a hybrid team of their own staff and
expert outside contractors. When choosing a third-party SI or professional
services firm, look for a provider with experience building brokerages for
other customers like your organization. You should also investigate their
strategic alliances with the CSPs and tools providers your organization
requires in your brokerage. When it comes to expert outside contractors, the
same rules apply. You might get lucky finding such highly skilled
contractors through contingent staffing firms – the so-called body shops – if
you’re willing to go through enough resumes. However, when recruiting contractors
for your cloud brokerage, you’ll probably need to tap your own team
members’ professional networks to find the right caliber of cloud contractor.
The Future of Developer Careers
As a developer in a world with frequent deploys, the first few things I want
to know about a production issue are: When did it start happening? Which build
is, or was, live? Which code changes were new at that time? And is there
anything special about the conditions under which my code is running? The
ability to correlate some signal to a specific build or code release is table
stakes for developers looking to grok production. Not coincidentally, “build
ID” is precisely the sort of “unbounded source” of metadata that traditional
monitoring tools warn against including. In metrics-based monitoring systems,
doing so commits you to an ever-increasing set of captured metrics,
degrading the performance of that monitoring system AND bringing the
added “benefit” of paying your monitoring vendor substantially more for it.
Feature flags — and the combinatorial explosion of possible parameters when
multiple live feature flags intersect — throw additional wrenches into
answering that first question. And yet, feature flags are here to stay; so our tooling
and techniques simply have to level up to support this more flexibly defined
world. ... A developer approach to debugging prod means being able to
isolate the impact of the code by endpoint, by function, by payload type, by
response status, or by any other arbitrary metadata used to define a test
case.
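As a rough sketch of the kind of tooling this implies, the snippet below attaches the build ID and live feature-flag state to every structured event, so production behaviour can later be sliced by build, flag, endpoint, status, or any other arbitrary metadata. The emitter and all field names are hypothetical, not any particular vendor's API:

```python
import json
import time
import uuid

BUILD_ID = "2021.04.17-g3f9c2a"                      # stamped into the artifact at build time
ACTIVE_FLAGS = {"new_checkout": True, "batch_writes": False}   # current flag state

def emit_event(endpoint, status, duration_ms, **extra):
    """Emit one wide, structured event per request so prod can be sliced later
    by build, flag state, endpoint, response status, payload type, etc."""
    event = {
        "timestamp": time.time(),
        "trace_id": str(uuid.uuid4()),
        "build_id": BUILD_ID,          # the "unbounded" metadata metrics tools warn about
        "feature_flags": ACTIVE_FLAGS,
        "endpoint": endpoint,
        "status": status,
        "duration_ms": duration_ms,
        **extra,
    }
    print(json.dumps(event))  # in practice this would go to an event pipeline

emit_event("/api/checkout", 500, 182.4, payload_type="gift_card")
```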
Brain researchers get NVMe-over-RoCE for super-fast HPC storage
NVMe-over-fabrics is a storage protocol that allows NVMe solid-state drives
(SSDs) in an external array to be accessed as if they were connected directly
to the server's PCIe bus. It does away with the SCSI protocol as an intermediate layer,
which tends to form a bottleneck, and so allows for flow rates several times
faster than a traditionally connected array. NVMe using RoCE (RDMA over
Converged Ethernet) is an implementation of NVMe-over-fabrics that uses pretty
much standard Ethernet cables and switches. The benefit here is that this is an already-deployed
infrastructure in a lot of office buildings. NVMe-over-RoCE doesn’t make use
of TCP/IP layers. That’s distinct from NVMe-over-TCP, which is a little less
performant and doesn’t allow for storage and network traffic to pass across
the same connections. “At first, we could connect OpenFlex via network
equipment that we had in place, which was 10Gbps. But it was getting old, so
in a fairly short time we moved to 100Gbps, which allowed OpenFlex to flex its
muscles,” says Vidal. ICM verified the feasibility of the deployment with its
integration partner 2CRSi, which came up with the idea of implementing
OpenFlex like a SAN in which the capacity would appear local to each
workstation.
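For a sense of what connecting a host looks like, here is a hedged sketch that drives the standard Linux nvme-cli over the RDMA transport from Python; the target address and NQN are placeholders, not values from the ICM/OpenFlex deployment described above:

```python
import subprocess

TARGET_ADDR = "192.0.2.10"                              # example address, not a real array
TARGET_NQN = "nqn.2021-04.example:openflex-namespace"   # hypothetical subsystem NQN

def run(cmd):
    """Run a command and return its output, echoing it for clarity."""
    print("$", " ".join(cmd))
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Discover the subsystems the target exposes over the RDMA (RoCE) transport.
print(run(["nvme", "discover", "-t", "rdma", "-a", TARGET_ADDR, "-s", "4420"]))

# Connect; the remote namespace then appears as a local block device
# (e.g. /dev/nvme1n1), which is what makes remote capacity look local to each workstation.
run(["nvme", "connect", "-t", "rdma", "-n", TARGET_NQN, "-a", TARGET_ADDR, "-s", "4420"])
```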
Understanding Zapier, the workflow automation platform for business
In addition to using Zapier to connect workflows, companies have turned to it
for help during the COVID-19 pandemic. Foster said his company has helped
smaller firms move their business online quickly, connecting and updating
various applications such as CRM records. “Many small business owners
don’t have the technical expertise or someone on staff that can build these
sites for them,” he said. “So they turn to no-code tools to create
professional websites, and built automations with Zapier to reach new
customers, manage inventory, and ensure leads didn’t slip through the
cracks.” Saving employees time spent on repetitive tasks is a common
benefit, said Andrew Davison, founder of Luhhu, a UK-based workflow automation
consultancy and Zapier expert. He pointed to the amount of time wasted when
workers have to key in the same data in different systems; that situation is
only getting worse as businesses rely on more and more apps. “Zapier can
eliminate this, meaning staffing costs can be reduced outright, or staff can
be redeployed to more meaningful, growth-orientated work,” he said. “And human
error with data entry is avoided — which can definitely be an important thing
for some businesses in sensitive areas — like legal, for example.”
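As an illustration of how such an automation is typically wired up, the sketch below posts a new lead once to a Zapier webhook trigger, letting a Zap fan it out to a CRM, a spreadsheet, and email; the webhook URL and field names are placeholders, not a real configuration:

```python
import requests  # pip install requests

# A Zapier "Catch Hook" URL copied from the Zap's trigger step (placeholder here).
ZAP_WEBHOOK_URL = "https://hooks.zapier.com/hooks/catch/000000/abcdef/"

def push_lead(name, email, source):
    """Send one lead to the Zap instead of keying it into several systems by hand."""
    payload = {"name": name, "email": email, "source": source}  # arbitrary example fields
    resp = requests.post(ZAP_WEBHOOK_URL, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()  # the catch hook typically replies with a small status object

push_lead("Ada Lovelace", "ada@example.com", "website-contact-form")
```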
Microsoft's low-code tools: Now everyone can be a developer
Microsoft's new wave of low- and no-code tools in the Power Platform builds on
this, providing tooling for UI construction, for business process automation,
and for working with data. This fits in well with the current demographic
shifts, with new entrant workers coming from the generation that grew up with
open-world building games like Minecraft. Low-code tools might not look like
Minecraft worlds, but they give users the same freedom to construct a work
environment. There's a lot of demand, as Charles Lamanna, Microsoft CVP,
Low Code Application Platform, notes: "Over 500 million new apps will be built
during the next five years, which is more than all the apps built in the last
40 years." Most of those apps need to be low-code, as there's more than
an app gap -- there's also a developer gap, as there's more demand for
applications than there are developers to build that code. Much of that
demand is being driven by a rapid, unexpected, digital transformation. People
who suddenly find themselves working from home and outside the normal office
environment need new tools to help manage what were often manual business
processes. The asynchronous nature of modern business makes no-code
tooling an easy way of delivering these new applications, as Lamanna notes:
"It's kind of come into its own over the last year with the fastest period of
adoption we've ever seen across the board from like a usage point of view, and
that's just because of all these trends are coming to a head right now."
How to build the right data architecture for customer analytics
Whatever your tools, they’re only as good as the data that feeds them – so
when building any data architecture, you need to pay attention to the
foundations. Customer data platforms (CDPs) are the way to go for this, as
they centralise, clean and consolidate all the data your business is
collecting from thousands of touchpoints. They coordinate all of your
different data sources – almost like the conductor in an orchestra – and
channel that data to all the places you need it. As a central resource, a CDP
eliminates data silos and ensures that every team across your company has live
access to reliable, consistent information. CDPs can also segment
customer data – sorting it into audiences and profiles – and most importantly,
can easily integrate with the types of analytics or marketing tools already
mentioned. CDPs are often seen as a more modern replacement for DMP (data
management platform) and CRM (customer relationship management) systems, which
are unsuited to the multiplicity of digital customer touchpoints that
businesses now have to deal with. ... When you have the basics in place, deep
learning and artificial intelligence can allow you to go further. These
cutting-edge applications learn from existing customer data to take the
experience to the next level, for instance by automatically suggesting new
offers based on past behaviour.
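Conceptually, the consolidation and segmentation a CDP performs looks something like the sketch below; the sources, fields, and segment rule are invented for illustration and not tied to any particular CDP product:

```python
from collections import defaultdict

# Raw events arriving from different touchpoints (web, email, shop, ...).
events = [
    {"email": "jo@example.com",  "source": "web",   "type": "page_view", "value": 0},
    {"email": "jo@example.com",  "source": "email", "type": "click",     "value": 0},
    {"email": "jo@example.com",  "source": "shop",  "type": "purchase",  "value": 120},
    {"email": "sam@example.com", "source": "web",   "type": "page_view", "value": 0},
]

# Consolidate into one profile per customer, keyed on a common identifier.
profiles = defaultdict(lambda: {"events": 0, "lifetime_value": 0, "sources": set()})
for e in events:
    p = profiles[e["email"]]
    p["events"] += 1
    p["lifetime_value"] += e["value"]
    p["sources"].add(e["source"])

# A simple audience segment: multi-channel customers who have purchased.
high_intent = [email for email, p in profiles.items()
               if p["lifetime_value"] > 0 and len(p["sources"]) > 1]
print(high_intent)  # ['jo@example.com'] -> hand off to analytics/marketing tools
```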
Staying Flexible With Hybrid Workplaces
Once employers start tracking the ways in which their teams communicate and
learn, they can begin to find solutions to better spread that knowledge. For
example, is most of the learning coming from an outdated employee handbook, or
is there one person on the team that everyone goes to when there’s a question?
Is the technology that you’re using causing more confusion, and do you see
your team focusing on workarounds as opposed to the ideal solution?
Technology and tools should be our friends, and it’s in the best interest of
your organization to understand how people use them. That way you can optimize
the ones in place, or find something that’s more suitable to your specific
needs. If you see that your workforce is spending unneeded energy
wrestling with clunky software, or bypassing certain guidelines and
processes for something simpler, then you have a disconnect. And this issue is
only going to widen when your teams are driven apart by distance, which will
inevitably damage productivity, efficiency, and project success. Getting
feedback from employees is the most effective way to uncover these learning
processes, whether this is done through internal surveys or in recurring
check-ins. Through this feedback, you can weed out what isn’t working from
what is.
Edge computing in hybrid cloud: 3 approaches
The edge tier is a small and inexpensive device that mounts on the motorcycle,
which uses direct Bluetooth communication to connect with a dozen sensors on
the bike, as well as a smartwatch that the rider wears to monitor
biotelemetry. Finally, a Lidar-based scanner tracks other moving vehicles near
the bike, including ones that are likely to be a threat. The edge device also
uses the data it gathers for real-time alerting on things such as the speed,
behavior, and direction of other nearby vehicles that are likely to put
the rider at risk. It alerts the rider about hazardous road conditions and
obstacles such as gravel or ice, as well as issues with the motorcycle itself,
such as overheated brakes that may take longer to stop, a lean angle that's
too aggressive for the current speed, and hundreds of other conditions, all
generating alerts that help the rider avoid accidents. Moreover, the edge
device will alert the rider if heart rate, blood pressure, or other vitals
exceed a threshold. Keep in mind that you need the edge device here to
deal instantaneously with data such as speed, blood pressure, the truck about
to rear-end the rider, and so on. However, it makes sense to transmit the data
to a public cloud for deeper processing—for example, the ability to understand
emerging patterns that may lead up to an accident, or even bike maintenance
issues that could lead to a dangerous situation.
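A minimal sketch of this edge/cloud split, with made-up thresholds and field names, would evaluate safety rules on the device instantly and batch the same telemetry for deeper pattern analysis in the cloud:

```python
THRESHOLDS = {"heart_rate_bpm": 180, "lean_angle_deg": 55, "closing_speed_kph": 40}

cloud_batch = []  # readings queued for upload when connectivity allows

def upload_to_cloud(batch):
    # Placeholder for an HTTPS/MQTT push to a cloud analytics pipeline.
    print(f"uploading {len(batch)} readings for pattern and maintenance analysis")

def handle_reading(reading):
    """Alert locally the instant a threshold is crossed; queue everything for the cloud."""
    for metric, limit in THRESHOLDS.items():
        if reading.get(metric, 0) > limit:
            print(f"ALERT: {metric}={reading[metric]} exceeds {limit}")  # instant, on-device
    cloud_batch.append(reading)
    if len(cloud_batch) >= 100:  # periodic bulk upload for deeper processing
        upload_to_cloud(cloud_batch)
        cloud_batch.clear()

handle_reading({"heart_rate_bpm": 172, "lean_angle_deg": 58, "closing_speed_kph": 12})
```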
Using Agile with a Data Science Team
The idea for applying agile to data science was that all four steps would be
completed in each sprint and there would be a demo at the end. When applied this
way, they could understand together if the agile model was feasible or not.
Satti conducted agile ways of working sessions with the team to teach them the
importance of collaboration, interactions, respect, ownership, improvement,
learning cycles and delivering value. The team had to go through a cultural and
mindset shift because they believed that agile in data science would only
work if data scientists understood and trusted the advantages of agile, Satti
said. The main benefit of introducing agile to the team was that they saw an
immediate increase in productivity, as the team members were clear on their
priorities and were able to focus on the specific task, Satti said. Due to this,
the team was able to commit to deliverables and timelines. Most of the time the
committed deadlines were met, which made the stakeholders happy and increased
their confidence in the team. Having the buy-in of the data science team was
crucial, and the team had to be taken on an agile journey rather than having it
forced on them, Satti mentioned.
Quote for the day:
"Blessed are the people whose leaders can look destiny in the eye without flinching but also without attempting to play God" -- Henry Kissinger