How connected automation will release the potential of IoT
Connected automation is an industry-first, no-code, highly secure,
software-as-a-service layer that IoT devices can easily connect to. It
intelligently orchestrates multi-vendor software robots, API mini-robots, AI,
and staff, all operating together in real time as an augmented digital
workforce. This hyper-productive digital workforce delivers high-speed,
data-rich, end-to-end processes that enable IoT devices to instantly
inter-communicate and securely work with physical and digital systems of all
ages, sizes, and complexities – at scale. So, for the first time,
investments in IoT can deliver their true potential, without huge
investments in changing existing systems. ... So, when human judgement is
required, handoffs arrive via robot-created, sophisticated, intuitive digital
user interfaces – all in real time. Where augmented insights are instantly
required within IoT-initiated processes, AI or other smart tools escalate
with predictive analysis and problem-solving capabilities, in real time. And
once decisions are made, by people or AI, they can be actioned immediately,
without major changes to existing systems or processes.
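To make the orchestration pattern concrete, here is a minimal Python sketch of the routing logic the article describes: a robot handles what it can, AI escalates where augmented insight helps, and only the remainder is handed off to a person. The event types, confidence threshold, and function names are invented for illustration and are not the vendor's product.

```python
# Illustrative event types that software robots can fully automate.
AUTOMATABLE = {"meter_reading", "heartbeat"}

def handle_event(event: dict) -> str:
    """Route an IoT event: robot first, then AI, then a human handoff."""
    if event["type"] in AUTOMATABLE:            # robot / API mini-robot path
        return f"robot handled {event['type']}"
    if event.get("confidence", 0.0) >= 0.9:     # AI path: predictive analysis
        return f"AI resolved {event['type']}"
    # Human-judgement path: in the article's vision this surfaces as a
    # robot-generated user interface, in real time.
    return f"queued {event['type']} for human review"

print(handle_event({"type": "heartbeat"}))
print(handle_event({"type": "anomaly", "confidence": 0.95}))
print(handle_event({"type": "anomaly", "confidence": 0.4}))
```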
Internet Outages Could Spread as Temperatures Rise. Here's What Big Tech Is Doing
We need data centers to be close to populations, but that means their
climatological impact is local, too. "If we don't address climate change, we
really will be toast," former Google CEO and chairman Eric Schmidt told CNBC in
April. He left the tech giant in 2017 to launch his own philanthropic firm to
support research in future-looking fields -- and found climate change harder to
ignore. "We really are putting the jeopardy of our grandchildren,
great-grandchildren and great-great-grandchildren at risk." Experts say that
data centers can be built to be kinder to the climate. But it's going to be
tough to pull off. When selecting a site for their data centers, companies like
Microsoft and Amazon prioritize access to low-cost energy, which they've
historically found in places like Silicon Valley, northern Virginia and
Dallas/Fort Worth, though Atlanta and Phoenix have been growing. They also look
for internet infrastructure from telecoms AT&T, Verizon and CenturyLink,
along with fiber providers like Charter and Comcast, to keep data
flowing.
Google AI — Reincarnating Reinforcement Learning
To overcome inefficiencies of the tabula rasa RL, Google AI introduces
Reincarnating RL — an alternative approach to RL research, where prior
computational work, such as learned models, policies, logged data, etc., is
reused or transferred between design iterations of an RL agent or from one
agent to another. Some sub-areas of reinforcement learning already leverage
prior computation, but most RL agents are still trained largely from scratch.
Until now, there has been no broader effort to leverage prior computational
work for the training workflow in RL research. The code and trained agents
have been released to enable researchers to build on this work. Reincarnating
RL is a more efficient way to train RL agents than training from scratch. This
can allow for more complex RL problems to be tackled without requiring
excessive computational resources. Furthermore, RRL can enable a benchmarking
paradigm where researchers continually improve and update existing trained
agents. Real-world RL use cases will likely be in domains where prior
computational work is available.
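The released code targets large-scale agents, but the core idea can be shown in a toy setting. Below is a minimal sketch, assuming a tabular Q-learning agent on an invented six-state chain, where a new agent is "reincarnated" from a previous agent's Q-table rather than from zeros; the environment and hyperparameters are illustrative only.

```python
import random

N_STATES, ACTIONS = 6, [0, 1]   # toy chain MDP: move left (0) or right (1)

def step(s, a):
    """Reward 1 only on reaching the rightmost state, which ends the episode."""
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0), s2 == N_STATES - 1

def greedy(q, s):
    """Greedy action with random tie-breaking."""
    best = max(q[s])
    return random.choice([a for a in ACTIONS if q[s][a] == best])

def train(q, episodes, alpha=0.5, gamma=0.9, eps=0.1):
    """Standard epsilon-greedy Q-learning over a mutable Q-table."""
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = random.choice(ACTIONS) if random.random() < eps else greedy(q, s)
            s2, r, done = step(s, a)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

# Tabula rasa: the "teacher" agent learns entirely from scratch.
teacher = train([[0.0, 0.0] for _ in range(N_STATES)], episodes=500)

# Reincarnation: the "student" starts from the teacher's value estimates (a
# stand-in for reusing policies, models, or logged data) and needs far less
# additional training to behave well.
student = train([row[:] for row in teacher], episodes=25)
print([greedy(student, s) for s in range(N_STATES)])  # mostly 1s: move right
```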
Best practices for bolstering machine learning security
Given the proliferation of businesses using ML and the nuanced approaches for
managing risk across these systems, how can organizations ensure their ML
operations remain safe and secure? When developing and implementing ML
applications, Hanif and Rollins say, companies should first use general
cybersecurity best practices, such as keeping software and hardware up to
date, ensuring their model pipeline is not internet-exposed, and using
multi-factor authentication (MFA) across applications. After that, they
suggest paying special attention to the models, the data, and the interactions
between them. “Machine learning is often more complicated than other systems,”
Hanif says. “Think about the complete system, end-to-end, rather than the
isolated components. If the model depends on something, and that something has
additional dependencies, you should keep an eye on those additional
dependencies, too.” Hanif recommends evaluating three key things: your input
data, your model’s interactions and output, and potential vulnerabilities or
gaps in your data or models.
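As one concrete instance of scrutinising input data before it reaches a model, here is a small hypothetical Python sketch of a schema-and-range gate; the feature names and bounds are invented, and a production pipeline would pair this with monitoring of model outputs and dependencies.

```python
import math

# Hypothetical schema for one model input row: (type, allowed range).
FEATURE_SPEC = {
    "age":      (float, 0.0, 120.0),
    "income":   (float, 0.0, 1e7),
    "n_logins": (int, 0, 10_000),
}

def validate_row(row: dict) -> list[str]:
    """Return a list of problems; an empty list means the row may proceed."""
    problems = []
    for name, (typ, lo, hi) in FEATURE_SPEC.items():
        if name not in row:
            problems.append(f"missing feature: {name}")
            continue
        value = row[name]
        if not isinstance(value, typ):
            problems.append(f"{name}: expected {typ.__name__}, got {type(value).__name__}")
        elif isinstance(value, float) and not math.isfinite(value):
            problems.append(f"{name}: non-finite value")
        elif not (lo <= value <= hi):
            problems.append(f"{name}: {value} outside [{lo}, {hi}]")
    extras = set(row) - set(FEATURE_SPEC)
    if extras:
        problems.append(f"unexpected features: {sorted(extras)}")
    return problems

print(validate_row({"age": 200.0, "income": 50_000.0, "n_logins": 12}))
# ['age: 200.0 outside [0.0, 120.0]']
```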
How To Be Crypto-Agile Before Quantum Computing Upends The World
To be crypto-agile means to be able to make cryptographic changes quickly and
without the burden of massive projects. That means adopting tools and
technologies that abstract away underlying cryptographic primitives and that
can change readily. To be crypto-agile is to acknowledge that change is on the
horizon and that anything built today needs to be able to adapt to coming
changes. Smart organizations are already updating existing systems and
imposing crypto-agility requirements on all new projects. This is an opportunity for
security teams to re-examine not just what algorithms they are using but also
their data protection strategies in general. Most data today is “protected”
using transparent disk or database encryption. This is low-level encryption
that makes sure the bytes are scrambled before they hit the disk but offers
no protection while the machine is running – and servers stay on around the clock. A better
approach is to use application-layer encryption (ALE). ALE is an architectural
approach where data is encrypted before going to a data store. When someone
peeks at the data in the data store, they see random bytes that have no
meaning without the correct key.
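Both ideas can be sketched together: encrypt at the application layer, and prefix each blob with an algorithm identifier so the primitive can be swapped later without touching callers. This minimal sketch assumes the pyca/cryptography package; the framing bytes and key handling are illustrative, not a vetted design.

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

# Crypto-agility: ciphertexts carry an algorithm id, so the primitive can be
# swapped (say, for a post-quantum-era replacement) without changing callers.
ALGORITHMS = {
    b"\x01": AESGCM,
    b"\x02": ChaCha20Poly1305,
}
CURRENT = b"\x02"  # rotate algorithms by changing one constant

def ale_encrypt(key: bytes, plaintext: bytes, aad: bytes = b"") -> bytes:
    """Application-layer encrypt: the data store only ever sees this blob."""
    nonce = os.urandom(12)
    aead = ALGORITHMS[CURRENT](key)
    return CURRENT + nonce + aead.encrypt(nonce, plaintext, aad)

def ale_decrypt(key: bytes, blob: bytes, aad: bytes = b"") -> bytes:
    """Dispatch on the stored algorithm id, so old blobs remain readable."""
    alg_id, nonce, ct = blob[:1], blob[1:13], blob[13:]
    return ALGORITHMS[alg_id](key).decrypt(nonce, ct, aad)

key = os.urandom(32)                 # both AEADs here accept 256-bit keys
blob = ale_encrypt(key, b"account=42;balance=100")
print(blob[13:21])                   # random bytes, meaningless without the key
print(ale_decrypt(key, blob))
```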
What Happens if Microservices Vanish -- for Better or for Worse
The modern cloud has really accelerated the move towards those architectures.
There are benefits and drawbacks to those architectures. There are a lot more
moving pieces and a lot more complexity, and yet microservices offer a way to
tame some of that complexity by putting services behind API boundaries. Amazon
was famous in the early days because Jeff Bezos required that teams
communicate through APIs. That created the notion that each team ran a
different service, and the services were connected through software – APIs,
not human beings. That helps different teams move independently and
codify the contract between the teams, and yet there is no question that it
can be massively overdone and can be used as a tool to sweep complexity under
the rug and pretend it doesn’t exist. Once it’s behind an API, it’s easy to
just set it and forget it. The reality is, I see companies with thousands of
microservices when they probably should have had five. It can definitely be
overdone; I think of it as a spectrum.
IT leaders meet the challenge to innovate frugally
When CIOs undertake this exercise, Sethi says, “they should ensure that their
biases and preferences are kept at bay. For instance, if an IT leader wants to
upgrade a system but the analysis shows it is not critical from a business,
technology, or risk perspective, it should be deferred.” This approach helps
CIOs prioritize spend. “At the end of the exercise, technology leaders may
end up with 50% of the budget for vital initiatives, 30% for essential
projects, and the remaining 20% for desirable initiatives.” With budgets locked in,
at whatever levels, CIOs will get the clarity to take up and sustain
innovative implementations accordingly. ... According to Singh, “one of the
most challenging aspects of innovating with budget constraints is to find a
vendor who is willing to customize and develop at a low cost. The second is
to find team members who are ready to toil hard to run and test the scenarios
in real time.” “We offered an attractive proposition to the partner company —
it was free to sell the developed solution to other customers. The partner
found it compelling enough to work for us virtually free of cost...” he
says.
How Cisco keeps its APIs secure throughout the software development process
To wrangle the complexity of the API landscape and make it more secure, Cisco
adopted a “shift-left” strategy, incorporating security earlier into the
software development process. “Shift-left security is really about
prioritizing security and bringing it to the top of mind in the day-to-day
work of a developer so they can harden their code and [decrease] the threats
from cyberattacks,” Francisco says. An API-for-an-API, a solution for which
Cisco won a 2022 CSO50 award, weaves security into the end-to-end cycle for
enterprise API services. The tool assists from code development through
deployment, tracks APIs’ security posture live while the application is in
production, and integrates with API gateways. The solution tests API interfaces against
Cisco’s security policies. The end-to-end solution is meant for both
developers and DevSecOps professionals. “From a cultural perspective, we have
a lot of work left to do to break down the silos between these groups, because
they speak a different language and they’re looking at different data points,”
Francisco says.
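Cisco has not published the internals of its tool, but a shift-left API security gate can be illustrated with a small hypothetical CI script that audits a JSON-form OpenAPI document and fails the build when an operation declares no security requirement; the policy and file handling here are assumptions for the example.

```python
import json
import sys

HTTP_METHODS = {"get", "put", "post", "delete", "patch", "head", "options"}

def audit_openapi(path: str) -> list[str]:
    """Flag operations whose effective security requirement is empty."""
    spec = json.load(open(path))
    global_security = spec.get("security", [])
    findings = []
    for route, ops in spec.get("paths", {}).items():
        for method, op in ops.items():
            if method not in HTTP_METHODS:
                continue  # skip parameters, summary, etc.
            # Operation-level 'security' overrides the global default.
            if not op.get("security", global_security):
                findings.append(f"{method.upper()} {route}: no security requirement")
    return findings

if __name__ == "__main__":
    findings = audit_openapi(sys.argv[1])
    for f in findings:
        print("POLICY VIOLATION:", f)
    sys.exit(1 if findings else 0)  # a non-zero exit fails the CI stage
```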
Zero trust – what is it and why is strong authentication critical?
Zero trust was developed as a response to the new realities of our digital
world. Enterprises must grapple with the challenge of authenticating employees
in today’s hybrid/remote economy. Gartner estimated that 51 per cent of
knowledge workers were remote by the end of 2021, and a Microsoft study
found that 67 per cent of employees bring their own devices. Zero trust
accommodates these modern network realities, including remote users, BYOD, and
cloud-based assets which are not located within an enterprise-owned network
boundary. A perimeter-focused security approach does little to combat insider
threats, which are one of the most serious sources of breaches today. ...
Since a zero-trust model assumes a network is always at risk of being exposed
to threats and requires all users and all devices be authenticated and
authorised, authentication plays a huge role in a zero-trust ecosystem. Zero
Trust Architecture is centred around identity and data, as the goal of
implementation is to protect access to data by specific, authorised identities
dynamically.
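As a minimal illustration of "always verify", here is a stdlib-only Python sketch that re-checks identity, token integrity, expiry, device, and scope on every request, regardless of network location. The token format, shared secret, and device list are invented; a real deployment would use standards such as OIDC with MFA and an MDM-backed device posture check.

```python
import hashlib
import hmac
import json
import time

SECRET = b"rotate-me"               # illustrative shared signing secret
KNOWN_DEVICES = {"laptop-7f3a"}     # registered devices, kept simple here

def authorise(request: dict) -> bool:
    """Zero trust: verify on EVERY request, never by network location."""
    token = request.get("token", {})
    claims, sig = token.get("claims", {}), token.get("sig", "")
    expected = hmac.new(SECRET, json.dumps(claims, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False                          # tampered or forged token
    if claims.get("exp", 0) < time.time():
        return False                          # stale sessions re-authenticate
    if claims.get("device") not in KNOWN_DEVICES:
        return False                          # unknown device blocked
    return claims.get("scope") == request.get("resource")  # least privilege

claims = {"sub": "alice", "device": "laptop-7f3a",
          "scope": "payroll", "exp": time.time() + 300}
sig = hmac.new(SECRET, json.dumps(claims, sort_keys=True).encode(),
               hashlib.sha256).hexdigest()
print(authorise({"resource": "payroll", "token": {"claims": claims, "sig": sig}}))  # True
print(authorise({"resource": "admin",   "token": {"claims": claims, "sig": sig}}))  # False
```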
In a data-led world, intuition still matters
Defining the problem first and then working backward toward the data can put
you in some good company. The authors cite the example of Amazon. There, when
people have an idea for a new product or service, they have to write up a
press release and FAQs to help them and everyone else understand what it is,
how it will work, and how various contingencies would be handled. That process
helps all parties gain insight into what they really need to know to determine
if the scheme is a good idea. This sort of thing will help you focus on the
right data. But you also need to make sure the data is right. Here, again, the
authors have good advice: in both defining the problem and confronting the
data, they emphasize the importance of asking powerful, probing questions. In
particular, they recommend developing what they call “IWIK”—I Wish I
Knew—questions designed to elicit data actually relevant to making a decision.
All data, however obtained or elicited, must be rigorously interrogated. Is it
accurate? Do means and medians mask explosive outliers?
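A two-line illustration of that last question, with invented figures: the median can mask an explosive outlier entirely, while the mean is dragged far away from the typical value.

```python
from statistics import mean, median

deals = [10, 11, 9, 10, 12, 11, 10, 9, 10, 1_000]  # one explosive outlier
print(mean(deals), median(deals))  # 109.2 vs 10.0 – the median hides the outlier
```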
Quote for the day:
"The minute a person whose word means
a great deal to others dare to take the open-hearted and courageous way,
many others follow." -- Marian Anderson