How industrial AI will power the refining industry in the future
The ultimate vision for the industry is the self-optimising, autonomous plant
– and the increasing deployment of artificial intelligence (AI) across the
sector is bringing the reality of this ever closer. However, while refining
has been an early adopter of many digital tools, the industry is yet to fully
realise the potential of industrial AI. That is, in no small part, because AI
and machine learning are too often looked at in isolation, rather than being
combined with existing engineering capabilities – models, tools and expertise – to deliver a practical solution that effectively optimises refinery assets.
... Machine learning is used to create the model, leveraging simulation, plant
or pilot plant data. Domain knowledge, including first principles and engineering constraints, is also incorporated to build an enriched model — without requiring the user to have deep process expertise or be an AI expert. The
solutions supported by hybrid models act as a bridge between the first
principles-focused world of the past and the “smart refinery” environment of
the future. They are the essential catalyst helping to enable the
self-optimising plant.
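To make the idea of a hybrid model more concrete, here is a minimal sketch in Python. It assumes a toy heat-exchanger duty equation as the first-principles part and a scikit-learn regressor fitted to the residual between the physics and some invented plant data; the function names and figures are illustrative only, not any vendor's actual implementation.

# Minimal sketch of a hybrid (first-principles + machine learning) model.
# The physics term and the residual learner are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def first_principles_duty(flow, cp, delta_t):
    """Idealised heat-exchanger duty: Q = m * Cp * dT (the engineering model)."""
    return flow * cp * delta_t

# Hypothetical plant data: operating inputs and the measured duty, which
# deviates from the ideal equation (fouling, sensor noise).
rng = np.random.default_rng(0)
X = rng.uniform([1.0, 2.0, 5.0], [10.0, 4.5, 40.0], size=(500, 3))  # flow, cp, dT
measured = first_principles_duty(*X.T) * 0.93 + rng.normal(0.0, 5.0, 500)

# Machine learning captures only the residual the physics misses, so the
# model stays anchored to engineering knowledge and constraints.
residual = measured - first_principles_duty(*X.T)
ml_correction = GradientBoostingRegressor().fit(X, residual)

def hybrid_predict(inputs):
    """Enriched prediction = first principles + learned correction."""
    return first_principles_duty(*inputs.T) + ml_correction.predict(inputs)

print(hybrid_predict(X[:3]))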
Microsoft's new feature uses AI to make video chat less weird
Eye Contact uses the custom artificial intelligence (AI) engine in the Surface
Pro X's SQ1 SoC, so you shouldn't see any performance degradation, as much of
the complex real-time computational photography is handed off to it and to the
integrated GPU. Everything is handled at a device driver level, so it works
with any app that uses the front-facing camera -- it doesn't matter if you're using Teams or Skype or Slack or Zoom; they all get the benefit. There's
only one constraint: the Surface Pro X must be in landscape mode, as the
machine learning model used in Eye Contact won't work if you hold the tablet
vertically. In practice that shouldn't be much of an issue, as most
video-conferencing apps assume that you're using a standard desktop monitor
rather than a tablet PC, and so are optimised for landscape layouts. The
question for the future is whether this machine-learning approach can be
brought to other devices. Sadly it's unlikely to be a general-purpose solution
for some time; it needs to be built into the camera drivers and Microsoft here
has the advantage of owning both the camera software and the processor
architecture in the Surface Pro X.
Digital transformation: 5 ways the pandemic forced change
Zemmel says that the evolution of the CIO's role has been accelerated as well. He sees CIOs increasingly reporting to the CEO because they now have a dual mandate. In addition to their historical operational role running
the IT department, they now are also customer-facing and driving revenue. That
mandate is not new for forward-looking IT organizations, but the pandemic has
made other organizations hyper-aware of IT’s role in driving change quickly.
CIOs are becoming a sort of “chief influencing officer who is breaking down
silos and driving adoption of digital products,” Zemmel adds. Experian’s
Libenson puts it this way: “The pandemic has forced us to be closer to the
business than before. We had a seat at the table before. But I think we will
be a better organization after this.” The various panelists gave nods to the
role of technology, especially the use of data; Zemmel describes the second
generation of B2B digital selling as “capturing the ‘digital exhaust’ to drive
new analytic insights and using data to drive performance and create more
immersive experiences.”
Diligent Engine: A Modern Cross-Platform Low-Level Graphics Library
Graphics APIs have come a long way from a small set of basic commands allowing
limited control of configurable stages of early 3D accelerators to very
low-level programming interfaces exposing almost every aspect of the
underlying graphics hardware. The next-generation APIs, Direct3D12 by Microsoft and Vulkan by Khronos, are relatively new and have only started gaining widespread adoption and support from hardware vendors, while
Direct3D11 and OpenGL are still considered industry standard. ... This
article describes Diligent Engine, a light-weight cross-platform graphics API
abstraction layer that is designed to solve these problems. Its main goal is to take advantage of next-generation APIs such as Direct3D12 and Vulkan while at the same time providing support for older platforms via Direct3D11, OpenGL and OpenGL ES. Diligent Engine exposes a common C/C++ front-end for all supported platforms and provides interoperability with the underlying native APIs. It also supports integration with Unity and is designed to be used as the graphics subsystem in a standalone game engine, a Unity native plugin or any other 3D application. The full source code is available on GitHub and is free to use.
Supporting mobile workers everywhere
It is amazing how quickly video conferencing has been accepted as part of the
daily routine. Such is the success of services like Zoom that CIOs need to
reassess priorities. In a workforce where people work from home regularly, remote access is no longer limited to a few but must be available to all, and connectivity for the mobile workforce needs to extend to employees’ homes. Traditional VPN access has scalability limitations and is
inefficient when used to provide access to modern SaaS-based enterprise
applications. To reach all home workers, some organisations are replacing
their VPNs with SD-WANs. There is also an opportunity to revisit
bring-your-own-device (BYOD) policies. If people have access to computing at
home and their devices can be secured, then CIOs should question the need to
push out corporate laptops to home workers. While IT departments may have
traditionally deployed virtual desktop infrastructure (VDI) to stream business
applications to thin client devices, desktop as a service (DaaS) is a natural choice for delivering a managed desktop environment to home workers. For those organisations that are reluctant to use DaaS in the public cloud, desktop software can, as Oxford University Social Sciences Division (OSSD) has found (see below), easily be delivered in a secure and manageable way using containers.
Secure data sharing in a world concerned with privacy
Compliance costs and legal risks are prompting companies to consider an
innovative data-sharing method based on privacy-enhancing technologies (PETs): a new genre of technologies that can help them bridge competing privacy frameworks. PETs are a category of
technologies that protect data along its lifecycle while maintaining its
utility, even for advanced AI and machine learning processes. PETs allow their
users to harness the benefits of big data while protecting personally
identifiable information (PII) and other sensitive information, thus maintaining
stringent privacy standards. One such PET playing a growing role in
privacy-preserving information sharing is Homomorphic Encryption (HE), a
technique regarded by many as the holy grail of data protection. HE enables
multiple parties to securely collaborate on encrypted data by conducting
analysis on data which remains encrypted throughout the process, never exposing
personal or confidential information. Through HE, companies can derive the
necessary insights from big data while protecting individuals’ personal details
– and, crucially, while remaining compliant with privacy legislation because the
data is never exposed.
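As a simplified illustration of what "analysis on data which remains encrypted" can look like in practice, the sketch below uses the open-source python-paillier (phe) library, which implements additively homomorphic Paillier encryption. Fully homomorphic schemes support richer computation, but the principle of aggregating ciphertexts without decrypting the inputs is the same; the salary figures are invented.

# Minimal sketch: additively homomorphic encryption with python-paillier (phe).
# A data owner encrypts values; an analyst aggregates them without ever seeing
# the plaintexts; only the key holder can decrypt the final result.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Data owner: encrypt sensitive values (hypothetical salary figures).
salaries = [52_000, 61_500, 48_250, 70_000]
encrypted = [public_key.encrypt(s) for s in salaries]

# Analyst: sum and scale the ciphertexts directly -- no decryption needed.
encrypted_total = sum(encrypted[1:], encrypted[0])
encrypted_mean = encrypted_total * (1 / len(salaries))

# Key holder: decrypt only the aggregate, never the individual records.
print(private_key.decrypt(encrypted_mean))  # -> 57937.5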
When -- and when not -- to use cloud native security tools
Cloud native security tools like Amazon Inspector and Microsoft Azure Security
Center automatically inspect the configuration of common types of cloud
workloads and generate alerts when potential security problems are detected.
Google Cloud Data Loss Prevention and Amazon Macie provide similar
functionality for data by automatically detecting sensitive information that
is not properly secured and alerting the user. To protect data even further
there are tools, such as Amazon GuardDuty and Azure Advanced Threat
Protection, that monitor for events that could signal security issues within
cloud-based and on-premises environments. ... IT teams use services like
Google Cloud Armor, AWS Web Application Firewall and Azure Firewall to
configure firewalls that control network access to applications running in the
cloud. Related tools provide mitigation against DDoS attacks that target
cloud-based resources. ... Data stored on the major public clouds can be
encrypted electively -- or is encrypted automatically by default -- using
native functionality built into storage services like Amazon S3 and Azure Blob
Storage. Public cloud vendors also offer cloud-based key management services,
like Azure Key Vault and Google Key Management Service, for securely keeping
track of encryption keys.
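As a small example of the native encryption functionality mentioned above, the following sketch uses the AWS boto3 SDK to enforce default server-side encryption on an S3 bucket with a customer-managed KMS key; the bucket name and key alias are placeholders.

# Minimal sketch: enforcing default server-side encryption on an S3 bucket
# with a customer-managed KMS key, via the AWS boto3 SDK.
# "example-data-bucket" and "alias/example-data-key" are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_encryption(
    Bucket="example-data-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "alias/example-data-key",
                },
                # Reuse data keys within the bucket to cut KMS request costs.
                "BucketKeyEnabled": True,
            }
        ]
    },
)

# Verify what the bucket will now apply to new objects by default.
config = s3.get_bucket_encryption(Bucket="example-data-bucket")
print(config["ServerSideEncryptionConfiguration"]["Rules"])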
Four Case Studies for Implementing Real-Time APIs
Unreliable or slow performance can directly impact or even prevent the adoption
of new digital services, making it difficult for a business to maximize the
potential of new products and expand its offerings. Thus, it is not only crucial
that an API processes calls at acceptable speeds, but it is equally important to
have an API infrastructure in place that is able to route traffic to resources
correctly, authenticate users, secure APIs, prioritize calls, provide proper
bandwidth, and cache API responses. Most traditional API management (APIM) solutions were
made to handle traffic between servers in the data center and the client
applications accessing those APIs externally (north-south traffic). They also
need constant connectivity between the control plane and data plane, which
requires using third-party modules, scripts, and local databases. Processing a
single request creates significant overhead — and it only gets more complex when
dealing with the east-west traffic associated with a distributed
application. Considering that a single transaction or request could require multiple internal API calls, the bank in one of the case studies found it extremely difficult to deliver a good user experience to its customers.
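Response caching is one of the API infrastructure responsibilities listed above, and one of the simpler ones to picture in code. The sketch below is a generic in-process TTL cache wrapped around an API call; it is illustrative only, not a stand-in for a production API gateway, and the endpoint URL is hypothetical.

# Minimal sketch of TTL-based response caching for an API client.
# A real API gateway does this (plus routing, auth and rate limiting)
# out of process; this only illustrates the caching idea.
import time
import requests

_cache = {}  # url -> (timestamp, cached JSON payload)

def cached_get(url, ttl=30.0):
    """Return a cached JSON response if it is younger than `ttl` seconds."""
    now = time.monotonic()
    hit = _cache.get(url)
    if hit and now - hit[0] < ttl:
        return hit[1]  # served from cache: no upstream call, no extra latency
    response = requests.get(url, timeout=5)
    response.raise_for_status()
    payload = response.json()
    _cache[url] = (now, payload)
    return payload

# Hypothetical endpoint: the second call within 30 seconds never hits the network.
balance = cached_get("https://api.example.com/accounts/123/balance")
balance_again = cached_get("https://api.example.com/accounts/123/balance")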
Building the foundations of effective data protection compliance
Data protection by design and default needs to be planned within the whole
system, depending on the type of data and how much data a business has. Data
classification is the categorization of data according to its level of
sensitivity or value, using labels. These are attached as visual markings and
metadata within the file. When classification is applied, the metadata ensures
that the data can only be accessed or used in accordance with the rules that
correspond with its label. Businesses need to mitigate attacks and employee mistakes by starting with policy – assessing who has access – and then selecting a tool that fits the policy, not the other way round; no one should be faced with selecting a tool and then having to rewrite the policy to fit it. This approach then supports users with automation and labelling, which will
enhance the downstream technology. Once data is appropriately classified,
security tools such as Data Loss Prevention (DLP), policy-based email
encryption, access control and data governance tools are exponentially more
effective, as they can access the information provided by the classification
label and metadata that tells them how data should be managed and protected.
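To make the label-plus-metadata idea concrete, here is a minimal sketch in Python: a classification label is attached to a document as metadata, and a downstream check (standing in for a DLP or policy-based email encryption rule) consults that label before allowing the data to leave. The label names and the policy are hypothetical.

# Minimal sketch: a classification label stored as metadata, consulted by a
# downstream control before data is shared. Labels and policy are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Document:
    name: str
    body: str
    metadata: dict = field(default_factory=dict)

# Classification step: the label travels with the file as metadata.
def classify(doc, label):
    doc.metadata["classification"] = label
    return doc

# Downstream control (stand-in for DLP / policy-based email encryption):
# only labels explicitly allowed by policy may leave the organisation.
EXTERNAL_SHARE_ALLOWED = {"public", "internal"}

def can_share_externally(doc):
    label = doc.metadata.get("classification", "unclassified")
    return label in EXTERNAL_SHARE_ALLOWED

report = classify(Document("q3-results.docx", "draft figures"), "confidential")
print(can_share_externally(report))  # False: the label blocks external sharing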
Q&A on the Book Fail to Learn
People often fear failure because of the stakes associated with it. When we
create steep punishment systems and “one-strike-you’re-out” rules, it’s only
natural to be terrified of messing up. This is where we need to think more
like game designers. Games encourage trial and error because the cost of
starting over in a game is practically nothing. If I die playing Halo, I get
to respawn and try again immediately. We need to create more “respawn” options
in the rest of our lives. This is something that educators can do in their
course design. But it’s also something we can encourage as managers, company
leaders, or simply as members of society. The best way to do this is to start
talking more about our mistakes. These are things we should be able to
celebrate, laugh over, shake our collective heads at, and eventually grow
from. ... If we go back to people like Dyson and Edison, you see
failure-to-success ratios that reach five-thousand or even ten-thousand to
one. A venture capitalist who interviewed hundreds of CEOs arrived at the same
ratio for start-up companies making it big: about a 10,000:1
failure-to-success ratio. Now, we probably don’t need that many failures in
every segment of our lives, but think about how far off most of us are from
these numbers.
Quote for the day:
"Leaders need to be optimists. Their vision is beyond the present." -- Rudy Giuliani