Oracle outages serve as warning for companies relying on cloud technology
“Oracle engineers identified a performance issue within the back-end
infrastructure supporting the OCI Public DNS API, which prevented some incoming
service requests from being processed as expected during the impact window,” the
company said on its cloud infrastructure website. In an update, the company
said it implemented "an adaptive mitigation approach using real-time backend
optimizations and fine-tuning of DNS Load Management to handle current
requests." Oracle said that the outage caused a variety of problems for
customers. OCI customers using OCI Vault, API Gateway, Oracle Digital Assistant,
and OCI Search with OpenSearch, for example, may have received 5xx-type errors or
failures (which are associated with server problems), Oracle said. Identity
customers may have experienced issues when creating and modifying domains.
In addition, Oracle Management Cloud customers may have been unable to create
new instances or delete existing instances, Oracle said. Oracle Analytics Cloud,
Oracle Integration Cloud, Oracle Visual Builder Studio, and Oracle Content
Management customers may have encountered failures when creating new
instances.
EU parliamentary committee says 'no' to EU-US data privacy framework
In particular, the committee noted, the executive order is too vague, leaves
US courts — which would be the sole interpreters of the policy — wiggle room to
approve the bulk collection of data for signals intelligence, and doesn’t apply
to data accessed under US laws such as the CLOUD Act and the Patriot Act. The
parliamentary committee's major points echoed those of many critics of the deal
in the EU, as well as the criticism of the American Civil Liberties Union (ACLU),
which has said that the US has failed to enact meaningful surveillance reform.
... In short, the committee said that US domestic law is simply incompatible
with the GDPR framework, and that no agreement should be reached until those
laws are more in alignment. The committee’s negative response this week to the
proposed data privacy framework, however, was a nonbinding draft resolution;
though it is a sticking point, it does not put a formal halt to the adoption
process, as the committee’s approval was not required to move the agreement along.
How edge devices and infrastructure will shape the metaverse experience
Cloud-native edge infrastructure can address these shortcomings and provide
optimized service chaining. It can handle a tremendous amount of data
processing while delivering cost-effective, terabit-scale performance and
reduced power consumption. In doing so, edge computing can move past closed
networking models to meet the demanding data processing requirements of the
metaverse. “Edge computing allows data to be processed at or near the data
source, implying that commands and processes will occur promptly. As the
metaverse will require massive data simultaneously, processing data quickly
and seamlessly depends on proximity,” Prasad Joshi, SVP and head of emerging
technology solutions at Infosys, told VentureBeat. “Edge computing offers the
ability to process such information on a headset or on the device, thereby
making that immersive experience much more effective.” ... The power, space
and cooling limitations of legacy architecture compound the strain of this data
surge. While these challenges impact consumer-based metaverse applications,
the stakes are much higher for enterprise use cases.
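To make the proximity point concrete, here is a minimal Python sketch of edge-side preprocessing under stated assumptions: the SensorFrame record and summarize_at_edge function are illustrative names, not anything from Infosys or the article. The idea is only that raw frames are reduced at or near the device, so a compact summary, rather than the full stream, crosses the network.

# A minimal, hypothetical sketch of edge-side preprocessing: instead of
# streaming every raw frame to a remote cloud service, the edge node
# aggregates locally and ships only a compact summary upstream.
from dataclasses import dataclass
from statistics import mean
from typing import List

@dataclass
class SensorFrame:
    device_id: str
    latency_ms: float
    payload_kb: float

def summarize_at_edge(frames: List[SensorFrame]) -> dict:
    """Aggregate raw frames on the edge node so only a summary leaves the site."""
    return {
        "device_id": frames[0].device_id,
        "frame_count": len(frames),
        "avg_latency_ms": mean(f.latency_ms for f in frames),
        "total_payload_kb": sum(f.payload_kb for f in frames),
    }

if __name__ == "__main__":
    # 100 simulated frames of roughly 512 KB each; only the summary dict
    # (a few hundred bytes) would be sent upstream to the cloud.
    frames = [SensorFrame("headset-01", 4.2 + i * 0.1, 512.0) for i in range(100)]
    print(summarize_at_edge(frames))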
The New AI-Powered Bing Is Threatening Users. That’s No Laughing Matter
It’s not a Skynet-level supercomputer that can manipulate the real world. ...
Those feats are impressive. But combined with what appears to be an unstable
personality, a capacity to threaten individuals, and an ability to brush off
the safety features Microsoft has attempted to constrain it with, that power
could also be incredibly dangerous. Von Hagen says he hopes that his
experience being threatened by Bing makes the world wake up to the risk of
artificial intelligence systems that are powerful but not benevolent—and
forces more attention on the urgent task of “aligning” AI to human values.
“I’m scared in the long term,” he says. “I think when we get to the stage
where AI could potentially harm me, I think not only I have a problem, but
humanity has a problem.” Ever since OpenAI’s chatbot ChatGPT displayed the
power of recent AI innovations to the general public late last year, Big Tech
companies have been rushing to market with AI technologies that, until
recently, they had kept behind closed doors as they worked to make them
safer.
Machines Are Dreaming Instead of Learning
The question is—how much of the ‘data problem’ is about the quantity versus
the quality of data? To deal with this data scarcity or quantity, people are
moving away from accessing and using real data towards using synthetic data.
In a nutshell, synthetic data is artificially generated data, either
mathematically or statistically, which appears close to real-world data.
This also increases the amount of data, which, in turn, increases the
accuracy of each model and helps address existing flaws in the data. There
are many reasons to be drawn to synthetic data, such as
data privacy. ... One of the reasons that synthetic data is on the rise is
to tackle the bias that is present in smaller datasets. Larger
datasets can contain poor-quality data—which requires more fine-tuning
and heavier workloads—while synthetic data, for its part, does not capture
the full quality and range of variability present in real-world data. Synthetic data
is generated using algorithms that model the statistical properties of real
data.
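As a concrete illustration of “algorithms that model the statistical properties of real data,” here is a minimal Python sketch that fits a multivariate normal distribution to the mean and covariance of a stand-in real dataset and then samples synthetic rows from it. Production generators (GANs, copulas, diffusion models) are far more elaborate; the dataset and numbers below are invented for the example.

# A minimal sketch of statistically generated synthetic data: model the
# real sample's mean and covariance, then draw new rows from that fit.
import numpy as np

rng = np.random.default_rng(seed=0)

# Stand-in for a small "real" dataset: 200 rows, 3 numeric features.
real_data = rng.normal(loc=[10.0, 50.0, 0.5], scale=[2.0, 8.0, 0.1], size=(200, 3))

# Model the statistical properties of the real data.
mu = real_data.mean(axis=0)
cov = np.cov(real_data, rowvar=False)

# Sample as many synthetic rows as needed.
synthetic_data = rng.multivariate_normal(mu, cov, size=1000)

print("real mean:     ", np.round(mu, 2))
print("synthetic mean:", np.round(synthetic_data.mean(axis=0), 2))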
Making Microservices Just the Right Size
In the attempt to make smaller and simpler services, applications have become
more complex. The smaller service size is a great benefit to the individual
development team that owns that service, but the complex interconnection
between services has made the overall system architecture more involved.
We’ve essentially moved the complexity uphill. Rather than individual
developers dealing with complexity at the code level, system architects deal
with the complexity at the system level. Thus, services that are too large
are difficult to build and understand at scale. Services that are too small
simply move the complexity up to the system level. The goal, therefore, is
to find the right size. It’s like the story of Goldilocks and the Three
Bears: it’s easy to build services too big or too small, and finding that
just-right size often involves trial and error. How do you find the
Goldilocks size for your microservices? The answer depends a lot on your
organization and your application.
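One way to picture the trade-off is the hypothetical sketch below, in which an order-placing capability is either fragmented into nano-services (every step a separate network hop) or kept behind one cohesive OrderService that a single team can own. The service names, boundaries, and tax logic are invented for illustration, not drawn from the article.

# Too fine-grained: every step is its own service, so the complexity moves
# into the call graph between services.
#   price-service -> tax-service -> discount-service -> invoice-service
#
# A "Goldilocks" boundary keeps one cohesive business capability (ordering)
# behind a single API while staying small enough for one team to own.
class OrderService:
    """Owns the full 'place an order' capability: pricing, tax, invoicing."""

    TAX_RATE = 0.07  # illustrative flat rate

    def price(self, items: list[float], discount: float = 0.0) -> float:
        subtotal = sum(items) * (1.0 - discount)
        return round(subtotal * (1.0 + self.TAX_RATE), 2)

    def place_order(self, customer_id: str, items: list[float]) -> dict:
        total = self.price(items)
        # One local call chain instead of three cross-service requests.
        return {"customer_id": customer_id, "total": total, "status": "placed"}

if __name__ == "__main__":
    print(OrderService().place_order("c-42", [19.99, 5.00]))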
4 Ways To Be A Learning Leader
Constant curiosity makes learning simply part of you and your way of being.
If you're motivated and hungry to improve your skills and knowledge, you'll
learn more successfully. Professor and researcher Francesca Gino wrote,
“When our curiosity is triggered, we think more deeply and rationally about
decisions and come up with more-creative solutions.” Additionally,
developing and demonstrating a genuine interest in people and their
perspectives and interests enriches all your relationships. Start by asking
yourself what you're curious about, then think about all the topics that
extend from that. If this still feels hard, set an intention to ask one
other-oriented question per meeting or interaction. We all consume and
digest information and learning differently. Think about how you prefer to
learn in given contexts. For example, do you like to just go for it? Do you
like talking to other leaders, coaches or mentors? Maybe you like podcasts
or reading books and articles. Discover what works best for your
learning.
Malware authors leverage more attack techniques that enable lateral movement
"An increase in the prevalence of techniques being performed to conduct
lateral movement highlights the importance of enhancing threat prevention
and detection both at the security perimeter as well as inside networks,"
researchers from cybersecurity firm Picus said in their report. Many years
ago, lateral movement was associated primarily with advanced
persistent threats (APTs). These sophisticated groups of attackers are often
associated with intelligence agencies and governments, whose primary goals
are cyberespionage or sabotage. To achieve these goals, these groups
typically take a long time to understand the network environments they
infiltrate, establish deep persistence by installing implants on multiple
systems, identify critical servers and sensitive data stores, and try to
extract credentials that give them extensive access and privilege
escalation. APTs also used to operate in a targeted manner, going after
specific companies in specific industries that might have the secrets
their handlers are looking for.
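As a flavor of what detection “inside networks” can look like, here is a deliberately simple Python heuristic that flags accounts authenticating to an unusually large number of distinct hosts within a time window. The event format, field names, and threshold are assumptions made for this sketch, not anything taken from the Picus report, and a real deployment would rely on far richer signals.

# Illustrative heuristic: an account touching many distinct hosts in a short
# window is a common (if noisy) signal of possible lateral movement.
from collections import defaultdict
from typing import Iterable, Tuple

# Each event: (unix_timestamp, account, destination_host)
AuthEvent = Tuple[int, str, str]

def flag_lateral_movement(events: Iterable[AuthEvent],
                          window_seconds: int = 3600,
                          host_threshold: int = 5) -> set[str]:
    """Return accounts that reached more than `host_threshold` distinct
    hosts inside any single time window."""
    buckets: dict[tuple[str, int], set[str]] = defaultdict(set)
    for ts, account, host in events:
        buckets[(account, ts // window_seconds)].add(host)
    return {account for (account, _), hosts in buckets.items()
            if len(hosts) > host_threshold}

if __name__ == "__main__":
    events = [(100 + i, "svc-backup", f"srv-{i:02d}") for i in range(8)]
    events += [(200, "alice", "workstation-01")]
    print(flag_lateral_movement(events))  # {'svc-backup'}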
The cost and sustainability of generative AI
More demand for AI means more demand for the resources these AI systems use,
such as public clouds and the services they provide. This demand will most
likely be met with more data centers housing power-hungry servers and
networking equipment. Public cloud providers are like any other utility
resource provider and will increase prices as demand rises, much like we see
household power bills go up seasonally (also based on demand). As a result,
we normally curtail usage, running the air conditioning at 74 degrees rather
than 68 in the summer. However, higher cloud computing costs may not have
the same effect on enterprises. Businesses may find that these AI systems
are not optional and are needed to drive certain critical business
processes. In many cases, they may try to save money within the business,
perhaps by reducing the number of employees in order to offset the cost of
AI systems. It’s no secret that generative AI systems will displace many
information workers soon.
6 quantum computing questions IT needs to ask
The challenge is the older systems' data format and fields may not be
compatible with newer systems. In addition, the fields and tables might not
contain what you'd expect. There is also the complexity of free text fields
that store keywords. Do not underestimate the challenge of making existing
data available for quantum applications to work with. ... The important
question in developing quantum applications is finding tools that can
provide a 10-year lifespan with guaranteed software support. There are many
open source tools for quantum-based application development. A company could
take on one (or more) open source projects, but this can be a challenge and
a costly commitment. The issue is not only keeping your software up to date
(and retaining staff to develop it) but also developing quantum software
that's compatible with the rest of your IT environment. When considering
lifespan, factor in the risk that open source quantum software projects
may be abandoned.
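One common way to hedge against that lifespan risk, sketched below under invented names, is to keep application code behind a thin, SDK-neutral interface so that an abandoned or API-breaking quantum toolkit can be swapped out without rewriting business logic. Nothing here corresponds to a real vendor SDK.

# Hypothetical adapter pattern: the rest of the IT estate codes against
# QuantumBackend, and each concrete adapter translates to whichever open
# source quantum SDK the team currently uses.
from abc import ABC, abstractmethod

class QuantumBackend(ABC):
    """SDK-neutral contract for running circuits."""

    @abstractmethod
    def run_circuit(self, circuit_spec: dict, shots: int) -> dict:
        """Execute a circuit described in a vendor-neutral format and
        return measurement counts."""

class FakeSimulatorBackend(QuantumBackend):
    """Placeholder adapter so the wiring can be tested without any SDK."""

    def run_circuit(self, circuit_spec: dict, shots: int) -> dict:
        # Return a trivially uniform distribution over the declared outcomes.
        outcomes = circuit_spec.get("outcomes", ["0", "1"])
        return {o: shots // len(outcomes) for o in outcomes}

def portfolio_risk_job(backend: QuantumBackend) -> dict:
    """Application code depends only on the interface, not on any one SDK."""
    return backend.run_circuit({"outcomes": ["00", "11"]}, shots=1024)

if __name__ == "__main__":
    print(portfolio_risk_job(FakeSimulatorBackend()))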
Quote for the day:
"Leadership is an opportunity to
serve. It is not a trumpet call to self-importance." --
J. Donald Walters