What’s the buzz around AGI safety
Specification in AGI systems defines a system's goal and ensures that it aligns with the human designer's intentions and motives. These systems follow a pre-specified algorithm that allows them to learn from data in order to achieve a specific goal. Both the learning algorithm and the goal are supplied by the human designer; for example, goals like minimising a prediction error or maximising a reward. During training, the system will try to complete the objective, irrespective of whether it reflects the designer's intent. Hence, designers should take special care to specify an objective that leads to the desired or optimal behaviour. If the goal is a poor proxy for the intended behaviour, the system will learn the wrong behaviour, and the objective is considered "misspecified." This is the likely outcome whenever the specified goal does not align with the desired behaviour. To adhere to AGI safety, the system designer must understand why the system behaves the way it does and whether its behaviour will ever align with the designer's intent. A robust set of assurance techniques already exists for previous-generation systems.
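As a concrete, hypothetical illustration of misspecification (not from the article), consider a toy Python sketch in which the designer wants an agent to finish a race but specifies "points collected" as the reward; optimising the proxy yields the wrong behaviour:

    # Toy illustration of reward misspecification (hypothetical example):
    # the designer wants the agent to finish a race, but the specified
    # reward is "points collected". An optimizer of the proxy learns to
    # loop the point pickups instead of finishing.

    def intended_objective(trajectory):
        # What the designer actually wants: reach the finish line.
        return 1.0 if trajectory[-1] == "finish_line" else 0.0

    def specified_reward(trajectory):
        # The proxy the system is trained on: points collected en route.
        return sum(1.0 for state in trajectory if state == "point_pickup")

    candidate_policies = {
        "finish_the_race": ["start", "point_pickup", "finish_line"],
        "loop_the_pickups": ["start"] + ["point_pickup"] * 10,
    }

    # The learner picks whichever policy maximises the *specified* reward.
    best = max(candidate_policies,
               key=lambda p: specified_reward(candidate_policies[p]))
    print("policy chosen by the proxy:", best)  # loop_the_pickups
    print("intended objective achieved:",
          intended_objective(candidate_policies[best]))  # 0.0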
Google Cloud CISO Phil Venables On 8 Hot Cybersecurity Topics
“A different environment compared to my career in financial services — many
things the same, but many things different, especially the scale of what we do
and our ability to invest even more in security than even some of the largest
banks are able to invest,” Venables said. Google integrated its risk, security,
compliance and privacy teams from across the company into the Google
Cybersecurity Action Team announced last October. The consolidated team will
provide strategic security advisory services, trust and compliance support,
customer and solutions engineering, and incident response capabilities. “Those
were all teams that were doing really, really good stuff, but we thought it made
sense for them to be part of one integrated organization for cloud given the
importance of all four of those topics, making sure that we provide even more
focus on those things together,” Venables said. “That’s working out very well,
and I think that’s reflected in a lot of large organizations that are aligning
their risk compliance, security and privacy teams because of a lot of the
commonality between the types of controls that you have to implement to drive
those things effectively.”
Real-Time Policy Enforcement with Governance as Code
Cloud governance as code encourages collaboration and promotes agility. Through
this approach, development, operation, security and finance teams can gain
visibility into policies, and they can collaborate more effectively on policy
definition and enforcement. Teams can quickly and efficiently modify policies
and create new policies, and changes can be implemented in much the same way
teams modify application code or underlying infrastructure in today’s agile,
DevOps environments. ... Governance as code is emerging as a foundational
requirement for organizations scaling operations in the cloud. It champions
automated management of the complex cloud ecosystem via a human-readable,
declarative, high-level language. Infrastructure and security engineering teams
can adopt governance as code to enforce policies in an agile, flexible and
efficient manner while reducing developer friction. With governance as code, developers can avoid the obstacles that often hinder or discourage cloud adoption, gaining greater automation of, and visibility into, an organization’s cloud infrastructure and unifying teams around a common goal.
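As a rough sketch of the idea (hypothetical policy names, not any particular tool's API), policies can be declared as data, reviewed like source code, and enforced automatically against resource definitions:

    # Minimal sketch of governance as code: policies are declarative
    # data; enforcement is an automated check over resource definitions.

    POLICIES = [
        {"id": "storage-encryption", "applies_to": "bucket",
         "check": lambda r: r.get("encrypted") is True},
        {"id": "no-public-ingress", "applies_to": "firewall_rule",
         "check": lambda r: r.get("source") != "0.0.0.0/0"},
    ]

    def evaluate(resource):
        """Return the ids of all policies the resource violates."""
        return [p["id"] for p in POLICIES
                if p["applies_to"] == resource["type"]
                and not p["check"](resource)]

    resources = [
        {"type": "bucket", "name": "logs", "encrypted": False},
        {"type": "firewall_rule", "name": "ssh", "source": "0.0.0.0/0"},
    ]

    for r in resources:
        for violation in evaluate(r):
            print(f"{r['name']}: violates {violation}")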
Leveraging machine learning to find security vulnerabilities
Code security vulnerabilities can allow malicious actors to manipulate software
into behaving in unintended and harmful ways. The best way to prevent such
attacks is to detect and fix vulnerable code before it can be exploited.
GitHub’s code scanning capabilities leverage the CodeQL analysis engine to find
security vulnerabilities in source code and surface alerts in pull requests –
before the vulnerable code gets merged and released. To detect vulnerabilities
in a repository, the CodeQL engine first builds a database that encodes a
special relational representation of the code. On that database we can then
execute a series of CodeQL queries, each of which is designed to find a
particular type of security problem. Many vulnerabilities are caused by a single
repeating pattern: untrusted user data is not sanitized and is subsequently
accidentally used in an unsafe way. For example, SQL injection is caused by
using untrusted user data in a SQL query, and cross-site scripting occurs as a
result of untrusted user data being written to a web page.
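A minimal Python illustration of that source-to-sink pattern (a hypothetical snippet, not CodeQL itself) shows how unsanitised input enables SQL injection, and how a parameterised query avoids it:

    # Untrusted input flowing unsanitised into a SQL statement is the
    # injection pattern a taint-tracking query is designed to flag.
    import sqlite3

    def find_user_unsafe(conn, username):
        # BAD: untrusted data concatenated into the query (SQL injection).
        query = "SELECT * FROM users WHERE name = '" + username + "'"
        return conn.execute(query).fetchall()

    def find_user_safe(conn, username):
        # GOOD: parameterised query; the driver handles escaping.
        return conn.execute(
            "SELECT * FROM users WHERE name = ?", (username,)).fetchall()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice')")
    # The classic payload: an always-true predicate returns every row.
    print(find_user_unsafe(conn, "' OR '1'='1"))  # leaks all rows
    print(find_user_safe(conn, "' OR '1'='1"))    # returns nothing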
AI Is Helping Scientists Explain the Brain
A raging debate that erupted recently in the field of decision-making highlights
these difficulties. It started with controversial findings of a 2015 paper in
Science that compared two models for how the brain makes decisions, specifically
perceptual ones. Perceptual decisions involve the brain making judgments about
what sensory information it receives: Is it red or green? Is it moving to the
right or to the left? Simple decisions, but with big consequences if you are at
a traffic stop. To study how the brain makes them, researchers have been
recording the activity of groups of neurons in animals for decades. When the
firing rate of neurons is plotted and averaged over trials, it gives the
appearance of a gradually rising signal, “ramping up” to a decision. ... In the
standard narrative based on an influential model that has been around since the
1990s, the ramp reflects the gradual accumulation of evidence by neurons. In
other words, that is how neurons signal a decision: by increasing their firing
rate as they collect evidence in favor of one choice or the other until they are
satisfied.
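A minimal simulation of this accumulation-to-bound idea (a sketch under simple assumed parameters, not the 2015 paper's actual model) shows how noisy single trials average into a smooth ramp:

    # Evidence-accumulation ("ramping") sketch: on each trial a noisy
    # signal drifts toward a decision bound; averaging many trials
    # produces the gradual ramp seen in trial-averaged firing rates.
    import random

    def one_trial(drift=0.1, noise=1.0, bound=10.0, max_steps=200):
        """Accumulate noisy evidence until it hits the decision bound."""
        x, trace = 0.0, []
        for _ in range(max_steps):
            x += drift + random.gauss(0.0, noise)
            trace.append(x)
            if abs(x) >= bound:
                break
        return trace

    # Single trials are jagged, but the mean across trials rises
    # gradually, like a ramping firing rate.
    trials = [one_trial() for _ in range(500)]
    horizon = min(len(t) for t in trials)
    avg = [sum(t[i] for t in trials) / len(trials) for i in range(horizon)]
    print([round(v, 2) for v in avg[:10]])  # gradual early rise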
What does the future of artificial intelligence look like within the life sciences?
The biggest hurdle for scientists is being able to more regularly adopt and
implement the infrastructure and existing tools needed to run their lab using
AI. This is especially true for open-ended research, where scientists don't have a predefined notion of which experiments need to happen, and in what order, to reach the desired outcome. The current infrastructure for managing lab
data was largely set up in the image of lab notebooks. Many companies are
tackling this problem by trying to retrofit data generated in this model to fit
the structure required for more in-depth data analysis. At ECL, we’ve tackled
this problem by proceduralizing the lab activities themselves, as well as the
storage of the data encompassing those activities. In this way, data is
comprehensive, organized, reproducible, and ready to be deployed into any given
analysis model. ... As scientists and companies recognize the reproducibility
and trustworthiness of data generated in a cloud lab like ECL, their focus will
shift away from concern over laboratory operations and logistics and more
towards the science itself.
From The Great Resignation To The Great Return: Bringing Back The Workforce
The biggest challenge is the enormous pressure on employees who don’t want to leave their jobs. Since talent leaders can’t fill open roles fast enough, employees who want to stay have had to take on the workload of multiple people on top of their day-to-day responsibilities. Beyond that,
it’s a candidate’s market, and job seekers have many job options and often have
multiple offers. As a result, companies have to make hiring decisions faster and
offer better benefits to attract talent and stand out among other companies.
Another challenge, according to Cassady, is that employees are missing key
connection points in this remote environment. “We have found that some of the
key factors in retaining your workforce are that people need to feel connected
to the company’s mission, the company’s leaders, and a connection to the team
they work with.” In addition, she adds, “Talent leaders must continue to create
communities within their company to retain their employees.”
The new rules of succession planning
First, start with the what and not the who. Doing so will lay out a more
realistic and substantive framework. Second, from this vantage point, try to
explicitly minimize the noise in the boardroom. Ensure that the directors are
using shared, contextual definitions of core jargon, such as strategy, agility,
transformation, and execution. Third, root the follow-on analyses of the
candidates in that shared understanding, and base any assessments on a factual
evaluation of their track records and demonstrated potential in order to
minimize the bias of the decision-makers themselves. Many companies sidestep
this hard work when developing their short list of candidates and rely instead
on familiar paths: the CEO may have preferred candidates, or a search firm or
industrial psychologist may have been asked to draft an ideal role profile or a
set of competencies to prescreen internal and external candidates. This
overemphasis on profiling the who of the next CEO triggers two failure points.
It leans right into “great leader” biases (the notion that the right person will
single-handedly solve all the company’s problems).
IT jobs: 7 hot automation skills in 2022
“One of the most important approaches to automation is infrastructure as code,”
says Chris Nicholson, head of the AI team at Clipboard Health. “Infrastructure
as code makes it easier to spin up and manage large clusters of compute, which
in turn makes it easier to introduce new products and features quickly, and to
scale in response to demand.” Kelsey Person, senior project manager at the
recruiting firm LaSalle Network, agrees: Experience with infrastructure as code
pops on a resume right now, because it indicates the knowledge and ability
needed to help drive significant automation initiatives elsewhere. “One skill we
are seeing in more demand is knowledge of DevOps tools, namely Ansible,” Person
says. “It can help organizations automate and simplify tasks and can save time
when developers and DevOps professionals are installing packages or configuring
many servers.” The ability to write homegrown automation scripts is a mainstay
of automation-centric jobs – it’s essentially the skill that never goes out of
style, even as a wider range of tooling enables non-developers to automate some
previously manual processes.
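As a hedged illustration of such a homegrown script (hypothetical host names, and the sort of repetitive step tools like Ansible automate), the same configuration command can be applied across many servers over SSH:

    # Sketch of a homegrown automation script: run one repeatable
    # configuration step on every host and report failures explicitly.
    import subprocess

    HOSTS = ["web-01.example.com", "web-02.example.com", "db-01.example.com"]
    COMMAND = "sudo apt-get install -y nginx"

    for host in HOSTS:
        result = subprocess.run(["ssh", host, COMMAND],
                                capture_output=True, text=True)
        status = "ok" if result.returncode == 0 else "FAILED"
        print(f"{host}: {status}")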
Why cloud-based cellular location is the solution to supply chain disruption
Cloud-based cellular location leveraging 5G, in combination with seamless
roaming integrated into a WAN, provides highly accurate end-to-end visibility,
starting with sub-metre accuracy on the factory floor with private networks and
extending to outdoor locations whenever and wherever an asset is transported,
from the beginning to end of a supply chain. Cloud-based cellular location
technologies are already in use today, leveraging ubiquitous 4G/5G networks for
massive IoT asset tracking applications. Their adoption is expected to increase significantly and to broaden to increasingly critical IoT use cases.
According to ABI Research, overall penetration of the cloud-based cellular
location installed base will reach 42% by 2026. In this period, it’s estimated
that there’ll be a four-fold increase in penetration driven largely by devices
on Cat-1, Cat-M, and NB-IoT networks. Asset tracking will be the main driver of
growth on these networks, as cloud-based cellular location becomes more
important for driving down costs. Cloud-based cellular location can enable
enterprises to unlock opportunities for critical IoT, and will help
revolutionise supply chain management.
Quote for the day:
"Leadership is the wise use of power.
Power is the capacity to translate intention into reality and sustain it."
-- Warren Bennis