DevOps Is Dead, Long Live AppOps
The NoOps trend aims to remove all friction between development and operations
by, as the name suggests, simply removing operations. This may seem a drastic
solution, but we do not have to take it literally. The right interpretation,
the feasible one, is to remove the human component from the deployment and
delivery phases as much as possible. That approach is naturally supported by
the cloud, which helps things work by themselves. ... One of the most evident
scenarios that shows the benefit of AppOps is any application based on
Kubernetes. If you open any cluster, you will find a lot of
pod/service/deployment settings that are mostly the same. In fact, every PHP
application has the same configuration except for its parameters, and the same
goes for Java, .NET, or other applications. The problem is that Kubernetes is
agnostic to the content of the applications it hosts, so we need to tell it
about every detail. We have to start from scratch for every new application,
even when the technology is the same. Why? I should have to explain only once
how a PHP application is composed.
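To make that concrete, here is a minimal sketch in Python of what "explain it once" could look like: the shared shape of a PHP deployment is captured in a single function, and each application supplies only its parameters. The function name, images, and defaults are invented for illustration, not part of any real AppOps tool.

    import json

    def php_deployment(name, image, replicas=2, port=80):
        """Template the Deployment settings every PHP app shares;
        only the parameters vary per application."""
        return {
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "metadata": {"name": name, "labels": {"app": name}},
            "spec": {
                "replicas": replicas,
                "selector": {"matchLabels": {"app": name}},
                "template": {
                    "metadata": {"labels": {"app": name}},
                    "spec": {
                        "containers": [{
                            "name": name,
                            "image": image,
                            "ports": [{"containerPort": port}],
                        }]
                    },
                },
            },
        }

    # Two PHP apps, one shared shape, different parameters only.
    print(json.dumps(php_deployment("blog", "example/blog-php:1.4"), indent=2))
    print(json.dumps(php_deployment("shop", "example/shop-php:2.0", replicas=4), indent=2))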
Thrill-K: A Blueprint for The Next Generation of Machine Intelligence
Living organisms and computer systems alike must have instantaneous knowledge to
allow for rapid response to external events. This knowledge represents a direct
input-to-output function that reacts to events or sequences within a
well-mastered domain. In addition, humans and advanced intelligent machines
accrue and utilize broader knowledge with some additional processing. I refer to
this second level as standby knowledge. Actions or outcomes based on this
standby knowledge require processing and internal resolution, which makes it
slower than instantaneous knowledge. However, it will be applicable to a wider
range of situations. Humans and intelligent machines need to interact with vast
amounts of world knowledge so that they can retrieve the information required to
solve new tasks or increase standby knowledge. Whatever the scope of the
knowledge held within the human brain or the boundaries of an AI system, there
is substantially more information outside that is potentially relevant and
warrants retrieval. I refer to this third level as retrieved external knowledge.
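The three levels can be read as a lookup cascade. The Python sketch below is my own illustration of that idea, not code from the Thrill-K proposal; all names and contents are invented.

    # Level 1: instantaneous knowledge, a direct input-to-output mapping.
    INSTANTANEOUS = {"red light": "stop"}

    # Level 2: standby knowledge, broader but needing processing to apply.
    STANDBY = {"route": lambda src, dst: f"plan a path from {src} to {dst}"}

    # Level 3: stands in for the vast external knowledge outside the system.
    EXTERNAL = {"capital of France": "Paris"}

    def respond(query, *args):
        if query in INSTANTANEOUS:       # fastest: react immediately
            return INSTANTANEOUS[query]
        if query in STANDBY:             # slower: process and resolve internally
            return STANDBY[query](*args)
        return EXTERNAL.get(query, "unknown")  # slowest: retrieve externally

    print(respond("red light"))              # -> stop
    print(respond("route", "home", "work"))  # -> plan a path from home to work
    print(respond("capital of France"))      # -> Paris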
GitHub’s Journey From Monolith to Microservices
Good architecture starts with modularity. The first step towards breaking up a
monolith is to think about the separation of code and data based on feature
functionalities. This can be done within the monolith before physically
separating them in a microservices environment. It is generally a good
architectural practice to make the code base more manageable. Start with the
data and pay close attention to how it is being accessed. Make sure each
service owns and controls access to its own data, and that data access only
happens through clearly defined API contracts. I’ve seen a lot of cases where
people start by pulling out the code logic but still rely on calls into a shared
database inside the monolith. This often leads to a distributed-monolith
scenario that ends up being the worst of both worlds: you have to manage the
complexities of microservices without any of the benefits, such as being able
to quickly and independently deploy a subset of features into production.
Getting data separation right is a cornerstone in migrating from a
monolithic architecture to microservices.
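As a hedged sketch of that boundary in Python (the service and method names are invented for illustration), the point is that the orders code depends on the user service's API contract, never on its tables, so the data can later move out of the monolith without touching callers:

    class UserService:
        """Owns user data; all access goes through its API contract."""
        def __init__(self):
            self._users = {1: {"name": "Ada", "email": "ada@example.com"}}

        def get_email(self, user_id: int) -> str:
            # The clearly defined contract other services rely on.
            return self._users[user_id]["email"]

    class OrderService:
        """Depends on the contract, not on shared database tables."""
        def __init__(self, users: UserService):
            self._users = users

        def receipt_address(self, user_id: int) -> str:
            # No call into a shared database; only the API contract is used,
            # so UserService can be extracted without changing this code.
            return self._users.get_email(user_id)

    orders = OrderService(UserService())
    print(orders.receipt_address(1))  # -> ada@example.com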
Data Strategy vs. Data Architecture
By being abstracted from the problem-solving and planning process, enterprise
architects became unresponsive, he said, and “buried in the catacombs” of IT.
Data Architecture needs to look at finding and putting the right mechanisms in
place to support business outcomes, which could be everything from data systems
and data warehouses to visualization tools. Data architects who see
themselves as empowered to facilitate the practical implementation of the
Business Strategy by offering whatever tools are needed will make decisions that
create data value. “So now you see the data architect holding the keys to a lot
of what’s happening in our organizations, because all roads lead through data.”
Algmin thinks of data as energy, because stored data by itself can’t accomplish
anything, and like energy, it comes with significant risks. “Data only has value
when you put it to use, and if you put it to use inappropriately, you can create
a huge mess,” such as a privacy breach. Like energy, it’s important to focus on
how data is being used and have the right controls in place.
Why CISA’s China Cyberattack Playbook Is Worthy of Your Attention
In the new advisory, CISA warns that the attackers will also compromise email
and social media accounts to conduct social engineering attacks. A person is much
more likely to click on an email and download software if it comes from a
trusted source. If the attacker has access to an employee's mailbox and can read
previous messages, they can tailor their phishing email to be particularly
appealing – and even make it look like a response to a previous message. Unlike
“private sector” criminals, state-sponsored actors are more willing to use
convoluted paths to get to their final targets, said Patricia Muoio, former
chief of the NSA’s Trusted System Research Group, who is now general partner at
SineWave Ventures. ... Private cybercriminals look for financial gain. They
steal credit card information and health care data to sell on the black market,
hijack machines to mine cryptocurrencies, and deploy ransomware. State-sponsored
attackers are after different things. If they plan to use your company as an
attack vector to go after another target, they'll want to compromise user
accounts to get at their communications.
Breaking through data-architecture gridlock to scale AI
Organizations commonly view data-architecture transformations as “waterfall”
projects. They map out every distinct phase—from building a data lake and data
pipelines up to implementing data-consumption tools—and then tackle each only
after completing the previous ones. In fact, in our latest global survey on data
transformation, we found that nearly three-quarters of global banks are
knee-deep in such an approach. However, organizations can realize results faster
by taking a use-case approach. Here, leaders build and deploy a minimum viable
product that delivers the specific data components required for each desired use
case (Exhibit 2). They then make adjustments as needed based on user feedback.
... Legitimate business concerns over the impact any changes might have on
traditional workloads can slow modernization efforts to a crawl. Companies often
spend significant time comparing the risks, trade-offs, and business outputs of
new and legacy technologies to prove out the new technology. However, we find
that legacy solutions cannot match the business performance, cost savings, or
reduced risks of modern technology, such as data lakes.
Data-Intensive Applications Need Modern Data Infrastructure
Modern applications are data-intensive because they make use of a breadth of
data in more intricate ways than anything we have seen before. They combine data
about you, about your environment, and about your usage, and use that to
predict what you need to know. They can even take action on your behalf. This
is made possible by the data made available to the app and by data
infrastructure that can process that data fast enough to make use of it.
Analytics that used to be done in separate applications (like Excel or
Tableau) are getting embedded into the application itself. This means less
work for the user to discover the key insight, or no work at all, as the
insight is identified by the application and simply presented to the user.
This makes it easier for the user to act on the
data as they go about accomplishing their tasks. To deliver this kind of
application, you might think you need an array of specialized data storage
systems, ones that specialize in different kinds of data. But data
infrastructure sprawl brings with it a host of problems.
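As one hedged illustration of analytics embedded in the application itself (the data and threshold are invented), the app computes the insight and simply presents it, instead of leaving the user to export the data into a separate tool:

    from statistics import mean

    # Usage data the app already holds about this user (invented numbers).
    logins_per_week = [12, 11, 13, 12, 4]

    baseline = mean(logins_per_week[:-1])  # typical weekly activity
    latest = logins_per_week[-1]

    # The insight is identified in-app and presented directly to the user.
    if latest < 0.5 * baseline:
        print(f"Your activity dropped from about {baseline:.0f} to {latest} "
              "logins this week. Want a summary of what you missed?")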
The Future of Microservices? More Abstractions
A couple of other initiatives regarding Kubernetes are worth tracking. Jointly
created by Microsoft and Alibaba Cloud, the Open Application Model (OAM) is a
specification for describing applications that separates the application
definition from the operational details of the cluster. It thereby enables
application developers to focus on the key elements of their application rather
than the operational details of where it deploys. Crossplane is the
Kubernetes-specific implementation of the OAM. It can be used by organizations
to build and operate an internal platform-as-a-service (PaaS) across a variety
of infrastructures and cloud vendors, making it particularly useful in the
multicloud environments increasingly common in large enterprises as a result
of mergers and acquisitions. Whilst OAM seeks to separate out
the responsibility for deployment details from writing service code, service
meshes aim to shift the responsibility for interservice communication away from
individual developers via a dedicated infrastructure layer that focuses on
managing the communication between services using a proxy.
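A minimal sketch of the separation OAM describes, written in Python for consistency with the other examples rather than in the spec's actual YAML format (the field names are simplified for illustration): the developer owns the application definition, operations owns the operational details, and the two are combined only at deployment time.

    # Developer's view: what the application is (no cluster details).
    component = {
        "name": "checkout",
        "image": "example/checkout:1.2",
        "port": 8080,
    }

    # Operator's view: how it runs in this cluster (scaling, routing).
    operational_traits = {
        "replicas": 3,
        "ingress": "checkout.internal.example.com",
    }

    def render_deployment(component, traits):
        # Combined only at deploy time; neither side edits the other's file.
        return {**component, **traits}

    print(render_deployment(component, operational_traits))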
Navigating data sovereignty through complexity
Data sovereignty is the concept that data is subject to the laws of the
country in which it is processed. In a world of rapid adoption of SaaS, cloud,
and hosted services, it is easy to see the issues that data sovereignty can
raise. In simpler times, data wasn't something businesses needed to be
concerned about; it could be shared and transferred freely with no
consequence. Businesses with a digital presence operated on a small scale,
with low data demands hosted on on-premise infrastructure. This meant that
data could be monitored and kept secure, much different from the more
distributed and hybrid systems that many businesses use today. With so much
data sharing and so little regulation, it all came crashing down with the
Cambridge Analytica scandal, prompting strict laws on privacy. ... When dealing
with on-premise infrastructure, governance is clearer, as it must follow the
rules of the country it’s in. However, when it’s in the cloud, a business can
store its data in any number of locations regardless of where the business
itself is.
How security leaders can build emotionally intelligent cybersecurity teams
EQ is important, as it has been found by Goleman and Cary Cherniss to positively
influence team performance and to cultivate positive social exchanges and social
support among team members. However, rather than focusing on cultivating EQ,
cybersecurity leaders such as CISOs and CIOs are often preoccupied by day-to-day
operations (e.g., dealing with the latest breaches, the latest threats, board
meetings, team meetings and so on). In doing so, they risk overlooking the
importance of developing and strengthening their own EQ and that of the
individuals within their teams. As well as EQ
considerations, cybersecurity leaders must also be conscious of the team’s
makeup in terms of gender, age and cultural attributes and values. This is very
relevant to cybersecurity teams as they are often hugely diverse. Such values
and attributes will likely introduce a diverse set of beliefs defined by how and
where an individual grew up and the values of their parents.
Quote for the day:
"The mediocre leader tells. The good leader explains. The superior leader
demonstrates. The great leader inspires." -- Buchholz and Roth