Five areas where EA matters more than ever
While resiliency has always been a focus of EA, “the focus now is on proactive
resiliency” to better anticipate future risks, says Barnett. He recommends
expanding EA to map not only a business's technology assets but also its
processes that rely on vendors, as well as part-time and contract workers, who
may become unavailable due to pandemics, sanctions, natural disasters, or other
disruptions. Businesses are also looking to use EA to anticipate problems and
plan for capabilities such as workload balancing or on-demand computing to
respond to surges in demand or system outages, Barnett says. That requires
enterprise architects to work more closely with risk management and security
staff to map dependencies among the components in the architecture, gauge the
likelihood and severity of disruptions, and formulate plans to cope with them.
EA can help, for example, by showing which cloud providers share the same
network connections, or which shippers rely on the same ports, so that a
"backup" provider won't suffer the same outage as the primary, he says.
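The shared-dependency check Barnett describes can be sketched as simple set intersection over a dependency map. Everything here is illustrative: the provider names and their dependencies are hypothetical, standing in for whatever an EA repository would actually record.

```python
# Hypothetical sketch: model each provider's upstream dependencies as a set,
# so an architecture review can flag a "backup" that shares a single point
# of failure with the primary. All names and dependencies are made up.

PROVIDER_DEPENDENCIES = {
    "cloud-primary":   {"network-ix-east", "power-grid-a"},
    "cloud-backup":    {"network-ix-east", "power-grid-b"},  # shares an exchange!
    "shipper-primary": {"port-long-beach"},
    "shipper-backup":  {"port-oakland"},
}

def shared_failure_points(primary: str, backup: str) -> set[str]:
    """Return dependencies common to both providers (empty set = independent)."""
    return PROVIDER_DEPENDENCIES[primary] & PROVIDER_DEPENDENCIES[backup]

print(shared_failure_points("cloud-primary", "cloud-backup"))      # {'network-ix-east'}
print(shared_failure_points("shipper-primary", "shipper-backup"))  # set()
```

A non-empty intersection is exactly the kind of hidden correlated risk that a process-and-vendor map is meant to surface before a disruption does.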
Build or Buy? Developer Productivity vs. Flexibility
To make things a bit more concrete, let’s look at a very simple example that
shows the positives of both sides. Developers are the primary audience for
InfluxData’s InfluxDB, a time series database. It provides both client libraries
and direct API access to the database so developers can choose whichever option
best fits their use case. The client libraries provide best practices out
of the box so developers can get started reading and writing data quickly.
Things like batching requests, retrying failed requests and handling
asynchronous requests are taken care of so the developer doesn’t have to think
about them. Using the client libraries makes sense for developers looking to
test InfluxDB or to quickly integrate it with their application for storing time
series data. On the other hand, developers who need more flexibility and control
can choose to interact directly with InfluxDB’s API. Some companies have lengthy
processes for adding external dependencies or already have existing internal
libraries for handling communication between services, so the client libraries
aren’t an option.
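To see what the client libraries are saving you from, here is a minimal sketch of batching with retry and exponential backoff, the kind of logic a team calling the HTTP API directly would have to write itself. The `send` callable is a stand-in for the actual HTTP POST a real client would make (e.g. to InfluxDB's `/api/v2/write` endpoint); the class and its parameters are illustrative, not part of any real library.

```python
import time

class BatchingWriter:
    """Sketch of the batching + retry logic a client library handles for you.
    `send` stands in for the HTTP POST a real client would issue."""

    def __init__(self, send, batch_size=500, max_retries=3, backoff=0.5):
        self.send = send
        self.batch_size = batch_size
        self.max_retries = max_retries
        self.backoff = backoff
        self.buffer = []

    def write(self, line):
        self.buffer.append(line)              # accumulate line-protocol points
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if not self.buffer:
            return
        batch, self.buffer = self.buffer, []
        for attempt in range(self.max_retries):
            try:
                self.send("\n".join(batch))   # one request for the whole batch
                return
            except ConnectionError:
                time.sleep(self.backoff * 2 ** attempt)  # exponential backoff
        raise RuntimeError("batch dropped after retries")
```

Going direct buys you control over every one of these knobs (batch size, retry policy, error handling), which is precisely the flexibility argument; the trade is that you now own this code.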
Enterprises shore up supply chain resilience with data
“Digital dialogue between trading partners is crucial, not just for those two
[direct trading partners], but also for the downstream effects,” he says,
adding that when it comes to supply chains and procurement, SAP’s focus is on
helping its customers ensure that the data “flows to the right trading
partners so that they can make proactive decisions in moving assets, logistics
and doing the right purchasing”. He notes that where supply chain
considerations have traditionally been built around “cost, control and
compliance”, companies are now looking to incorporate “connectivity,
conscience and convenience” alongside those factors. On the last point
regarding convenience, Henrik says this refers to having “information at my
fingertips when I need it”, meaning it is important for companies to not only
collect data on their operations, but to structure it in a way that drives
actionable insights. “Once you have actionable insights from the data, then
real change happens, and that’s really what companies are looking for,” he
says.
Ransomware is already out of control. AI-powered ransomware could be 'terrifying.'
If attackers were able to automate ransomware using AI and machine learning,
that would allow them to go after an even wider range of targets, according to
Driver. That could include smaller organizations, or even individuals. "It's
not worth their effort if it takes them hours and hours to do it manually. But
if they can automate it, absolutely," Driver said. Ultimately, “it's
terrifying.” The prediction that AI is coming to cybercrime in a big way is
not new, but it has yet to materialize, Hyppönen said. Most likely,
that’s because criminal groups have been unable to compete with deep-pocketed
enterprise tech vendors for the talent they would need. The huge success of
the ransomware gangs in 2021, predominantly
Russia-affiliated groups, would appear to have changed that, according to
Hyppönen. Chainalysis reports it tracked ransomware payments totaling $602
million in 2021, led by Conti's $182 million. The ransomware group that struck
the Colonial Pipeline, DarkSide, earned $82 million last year, and three other
groups brought in more than $30 million in that single year, according to
Chainalysis.
Will quantum computing ever be available off-the-shelf?
Quantum computing will never exist in a vacuum, and to add value, quantum
computing components need to be seamlessly integrated with the rest of the
enterprise technology stack. This includes HPC clusters, ETL processes, data
warehouses, S3 buckets, security policies, etc. Data will need to be processed
by classical computers both before and after it runs through the quantum
algorithms. This infrastructure is important: any speedup from quantum
computing can easily be offset by mundane problems like disorganized data
warehousing and sub-optimal ETL processes. Expecting a quantum algorithm to
deliver an advantage with a shoddy classical infrastructure around it is like
expecting a flight to save you time when you don’t have a car to take you to
and from the airport. These same infrastructure issues often arise in many
present-day machine learning (ML) use cases. There may be many off-the-shelf
tools available, but any useful ML application will ultimately be unique to
the model’s objective and the data used to train it.
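The classical-quantum "sandwich" described above can be sketched as an ordinary pipeline. Every stage here is a placeholder: `quantum_step` stands in for a call to a real quantum backend, and the pre- and post-processing are token examples of the classical ETL work that surrounds it.

```python
# Illustrative sketch of the classical -> quantum -> classical pipeline.
# All three stages are placeholders for the real infrastructure pieces.

def classical_preprocess(raw):
    # e.g. ETL: clean, normalize, and encode data for the quantum routine
    peak = max(raw)
    return [x / peak for x in raw]

def quantum_step(encoded):
    # placeholder for the quantum algorithm; here just an identity pass
    return encoded

def classical_postprocess(results):
    # e.g. decode measurements and load aggregates back into a warehouse
    return sum(results) / len(results)

def pipeline(raw):
    return classical_postprocess(quantum_step(classical_preprocess(raw)))

print(pipeline([1, 2, 4]))  # ~0.583
```

The point of the airport analogy holds here in miniature: if `classical_preprocess` is slow or feeds the quantum stage bad data, no speedup in `quantum_step` will rescue the end-to-end result.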
Addressing the skills shortage with an assertive approach to cybersecurity
All too often, businesses do not see investing in security strategy and
technologies as a priority – until an attack occurs. It might be the
assumption that only the wealthiest industries or those with highly classified
information would require the most up-to-date cybersecurity tactics and
technology, but this is simply not the case. All organizations need to adopt a
proactive approach to security, rather than having to deal with the aftermath
of an incident. By doing so, companies and organizations can significantly
mitigate any potential damage. Traditionally, security awareness may have been
restricted to specific roles, meaning only a select few people had the
training and understanding required to deal with cyber-attacks. Nowadays every
role, at every level, in every industry must have some knowledge of how to
secure themselves and their work against breaches. Training should be made available
for all employees to increase their awareness, and organizations need to
prioritize investment in secure, up-to-date technologies to ensure their
protection.
Easily Optimize Deep Learning with 8-Bit Quantization
There are two challenges with quantization: how to do it easily (in the past,
it has been a time-consuming process) and how to maintain accuracy. Both of
these challenges are addressed by the Neural Network Compression Framework
(NNCF). NNCF is a suite of advanced algorithms for optimizing machine learning
and deep learning models for inference in the Intel® Distribution of
OpenVINO™ toolkit. NNCF works with models from PyTorch and TensorFlow. One of
the main features of NNCF is 8-bit uniform quantization, using recent academic
research to create accurate and fast models. The technique we will be covering
in this article is called quantization-aware training (QAT). This method
simulates the quantization of weights and activations while the model is being
trained, so that operations in the model can be treated as 8-bit operations at
inference time. Fine-tuning is used to recover the accuracy lost to
quantization. QAT has better accuracy and reliability than carrying out
quantization after the model has been trained. Unlike other optimization
tools, NNCF does not require users to change the model manually or learn how
the quantization works.
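To make "8-bit uniform quantization" concrete, here is a minimal pure-Python sketch of the fake-quantize operation that QAT inserts during training: values are snapped to one of 256 evenly spaced levels and then mapped back to floats, so the network learns to tolerate the rounding error. This is a simplification (per-tensor, asymmetric range), not NNCF's actual implementation.

```python
def fake_quantize(values, num_bits=8):
    """Simulate uniform quantization: map floats onto 2**num_bits evenly
    spaced levels, then dequantize back to floats. QAT applies an operation
    like this to weights and activations during training. Simplified sketch:
    per-tensor, asymmetric range."""
    lo, hi = min(values), max(values)
    levels = 2 ** num_bits - 1                      # 255 levels for 8 bits
    scale = (hi - lo) / levels or 1.0               # guard against a flat tensor
    quantized = [round((v - lo) / scale) for v in values]   # ints in 0..255
    return [q * scale + lo for q in quantized]              # back to float

weights = [-0.51, 0.0, 0.27, 1.02]
print(fake_quantize(weights))  # close to the originals, on a 256-level grid
```

Because the rounding happens inside the training loop, gradient updates can compensate for it, which is why QAT typically beats quantizing a model only after training.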
Apache Druid: A Real-Time Database for Modern Analytics
With its distributed and elastic architecture, Apache Druid prefetches data
from a shared data layer into a scalable cluster of data servers. Because
there’s no need to move data at query time, and because the cluster can flex
with demand, this kind of architecture performs faster than a decoupled query
engine such as a cloud data warehouse. Additionally, Apache Druid can process
more queries per core by leveraging automatic, multilevel indexing that is
built into its data format. This includes a global index, data dictionary and
bitmap index, which goes beyond a standard OLAP columnar format and provides
faster data crunching by maximizing CPU cycles. ... Apache Druid provides a
smarter and more economical choice because of its optimized storage and query
engine that decreases CPU usage. “Optimized” is the keyword here; you want
your infrastructure to serve more queries in the same amount of time rather
than having your database read data it doesn’t need to.
Compete to Communicate on Cybersecurity
At its core, cybersecurity depends on communication. Outdated security
policies that are poorly communicated are as dangerous as substandard
software code and other flawed technical features. Changing human behavior in
digital security falls to the technology companies themselves, which need to
get better at explaining digital security issues to their employees and customers.
In turn, tech companies can help employees and customers understand what they
can do to make things better and why they need to be active participants in
helping to defend themselves, our shared data and digital infrastructure.
Instead of competing on the lowest price or claims of best service, how do we
incentivize service vendors, cloud providers, device manufacturers and other
relevant technology firms to pay more attention to how they communicate with
users around security? Rules and regulations? Possibly. Improving how
companies communicate and train on security? Absolutely. Shaping a marketplace
where tech companies compete more intensively for business on the technical
and training elements of security? Definitely.
A philosopher's guide to messy transformations
In the domain of expertise, people base their understanding of transformation
on practical insight into the history and culture of the company. A question
from an attendee on the panel I conducted illustrated this nicely: “How do you
get an organization with a legacy of being extremely risk averse to embrace
agility, which can be perceived as a more risky, trial-and-error approach?”
The question acknowledges that the company needs to embrace agility but shows
neither insight into nor interest in why it needs to do so. Whether the
questioner trusts senior management’s decision to embrace
agility, or she has other reasons for ignoring the “why,” it is obvious that
she wants to know about the “how.” Too often leaders forget about the how. And
that can be a costly mistake. ... “When you have an organization that has been
organically growing over 90 years, then the culture is embedded in the
language and the behaviors of the people working in the organization,” he
said. The strength of legacy companies is that their culture is defined by
conversations and behaviors that have been evolving for decades.
Quote for the day:
"The great leaders are like the best conductors. They reach beyond the notes
to reach the magic in the players." -- Blaine Lee