Why artificial general intelligence lies beyond deep learning
Decision-making under deep uncertainty (DMDU) methods such as Robust
Decision-Making may provide a conceptual framework to realize AGI reasoning
over choices. DMDU methods analyze the vulnerability of potential alternative
decisions across various future scenarios without requiring constant
retraining on new data. They evaluate decisions by pinpointing critical
factors common among those actions that fail to meet predetermined outcome
criteria. The goal is to identify decisions that demonstrate robustness — the
ability to perform well across diverse futures. While many deep learning
approaches prioritize optimized solutions that can fail when faced with
unforeseen challenges (as optimized just-in-time supply chains did during
COVID-19), DMDU methods prize robust alternatives that may trade
optimality for the ability to achieve acceptable outcomes across many
environments. DMDU methods offer a valuable conceptual framework for
developing AI that can navigate real-world uncertainties. Developing a fully
autonomous vehicle (AV) could demonstrate the application of the proposed
methodology. The challenge lies in navigating diverse and unpredictable
real-world conditions, thus emulating human decision-making skills while
driving.
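The contrast between optimizing and robustness-seeking can be sketched in a few lines. This is an illustrative toy, not a real DMDU method: the payoff numbers, decision names, and acceptability threshold are all invented to show why the average-best choice and the robust choice can differ.

```python
# Toy DMDU-style robustness screen (all numbers hypothetical).
# Rows: candidate decisions; columns: payoff under each future scenario.
payoffs = {
    "optimized": [98, 96, 20],  # excels normally, fails in a shock scenario
    "robust":    [70, 72, 68],  # never the best, acceptable everywhere
}

THRESHOLD = 50  # minimum acceptable outcome in any scenario

def optimal_choice(payoffs):
    """Return the decision with the best average payoff."""
    return max(payoffs, key=lambda d: sum(payoffs[d]) / len(payoffs[d]))

def robust_choices(payoffs, threshold):
    """Return decisions whose worst-case payoff still meets the threshold."""
    return [d for d, outcomes in payoffs.items() if min(outcomes) >= threshold]

print(optimal_choice(payoffs))             # "optimized" wins on average...
print(robust_choices(payoffs, THRESHOLD))  # ...but only "robust" survives all scenarios
```

The robust option sacrifices peak performance for an acceptable floor, which is exactly the trade the passage describes.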
Bouncing back from a cyber attack
In the case of a cyber attack, the inconceivable has already happened – all
you can do now is bounce back. The big picture issue is that too often IoT
(internet of things) networks are filled with bad code, poor data practices,
lack of governance, and underinvestment in secure digital infrastructure.
Driven by the popularity and growth of IoT, device manufacturers spring up
overnight, promoting products that are often built from lower-quality
components and firmware whose vulnerabilities, some of them well known, stem
from poor design and production practices. These vulnerabilities are then
introduced into customer environments, increasing risk and often going
unidentified. So, there’s a lot of work to do, including creating
visibility over deep, widely connected networks with a plethora of devices
talking to each other. All too often, IT and OT networks run on the same flat
network. Many of these organisations are planning segmentation projects, but
segmentation is complex and disruptive to implement, so in the meantime
companies want to understand what's going on in these environments and
minimise disruption in the event of an attack.
Diversity, Equity, and Inclusion for Continuity and Resilience
Continuity professionals tend to skew older, so how do we continue to bring
new people into the fold and ensure they feel they can learn and be respected
in the industry? Students need to be made aware that this is
an industry they can step into. Unfortunately, many already have experience
seeing active shooter drills as the norm. They may have never organized one,
but they have participated in many of these drills in school. Why not take
advantage of that experience for the students who are interested in this
field? Taking their advice could make exercises such as active-shooter or
severe-weather drills less traumatic. Having participated in these drills for
at least 13 years, today's students have insights that even Millennials lack;
Millennials grew up at the forefront of school shootings but without actively
rehearsing what to do if one happened while in school. These future
colleagues’ insights could change how we run specific exercises and events, to
everyone’s benefit. Still, there must be openness to new and fresh ideas,
treating them as valid instead of dismissing them because of their age and
inexperience. Similarly, people with disabilities have always been vocal about
their needs.
AI’s pivotal role in shaping the future of finance in 2024 and beyond
As AI becomes more embedded in the financial fabric, regulators are crafting a
nuanced framework to ensure ethical AI use. The Reserve Bank of India (RBI)
and the Securities and Exchange Board of India (SEBI) have initiated
guidelines for responsible AI adoption, emphasising transparency,
accountability, and fairness in algorithmic decision-making processes. While
the benefits are palpable, challenges persist. The rapid pace of AI
integration demands a strategic approach to ensure a safe financial
ecosystem. ... The evolving nature of jobs due to AI necessitates a concerted
effort towards upskilling the workforce. A McKinsey Global Institute report
indicates that approximately 46% of India’s workforce may undergo significant
changes in their job profiles due to automation and AI. To address this,
collaborative initiatives between the government, educational institutions,
and the private sector are imperative to equip the workforce with the
requisite skills for the future. ... The RBI and SEBI have recognised the
need for ethical AI use in the financial sector. Establishing clear guidelines and
frameworks for responsible AI governance is crucial.
How to proactively prevent password-spray attacks on legacy email accounts
Often with an ISP it’s hard to determine the exact location from which a user
is logging in. If they access from a cellphone, often that geographic IP
address is in a major city many miles away from your location. In that case,
you may wish to set up additional infrastructure to relay their access through
a tunnel that is better protected and easier to inspect. Don’t assume the
bad guys will use a malicious IP address to announce they have arrived at your
door. According to Microsoft, “Midnight Blizzard leveraged their initial
access to identify and compromise a legacy test OAuth application that had
elevated access to the Microsoft corporate environment. The actor created
additional malicious OAuth applications.” The attackers then created a new
user account to grant consent in the Microsoft corporate environment to the
actor-controlled malicious OAuth applications. “The threat actor then used the
legacy test OAuth application to grant them the Office 365 Exchange Online
full_access_as_app role, which allows access to mailboxes.” This is where my
concern pivots from Microsoft’s inability to proactively protect its processes
to the larger issue of our collective vulnerability in cloud
implementations.
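The defining signature of a password spray is a few attempts against many accounts, which per-account lockout thresholds never trip. A minimal detection sketch, assuming an invented log shape of (source IP, account) failed-auth pairs and a hypothetical threshold; real deployments would read sign-in logs from the identity provider:

```python
# Hypothetical sketch: flag source IPs showing a password-spray pattern,
# i.e. failed logins spread across MANY distinct accounts from one source.
# The log format and threshold here are invented for illustration.
from collections import defaultdict

def flag_spray_sources(failed_logins, account_threshold=10):
    """failed_logins: iterable of (source_ip, account) failed-auth events.
    Returns IPs that failed against more than account_threshold accounts."""
    accounts_per_ip = defaultdict(set)
    for ip, account in failed_logins:
        accounts_per_ip[ip].add(account)
    return {ip for ip, accts in accounts_per_ip.items()
            if len(accts) > account_threshold}

# One IP probing 12 accounts once each, vs. a user mistyping twice.
events = [("203.0.113.5", f"user{i}") for i in range(12)]
events += [("198.51.100.7", "alice"), ("198.51.100.7", "alice")]
print(flag_spray_sources(events))  # {'203.0.113.5'}
```

Counting distinct accounts per source, rather than attempts per account, is what separates spray detection from ordinary brute-force lockout logic.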
How To Implement The Pipeline Design Pattern in C#
The pipeline design pattern in C# is a valuable tool for software engineers
looking to optimize data processing. By breaking down a complex process into
multiple stages, and then executing those stages in parallel, engineers can
dramatically reduce the processing time required. This design pattern also
simplifies complex operations and enables engineers to build scalable data
processing pipelines. ...The pipeline design pattern is commonly used in
software engineering for efficient data processing. This design pattern
utilizes a series of stages to process data, with each stage passing its
output to the next stage as input. The pipeline structure is made up of three
components: the source, where the data enters the pipeline; the stages, each
responsible for processing the data in a particular way; and the sink, where
the final output goes. Implementing the pipeline design pattern offers
several benefits, one of the most significant being more efficient processing
of large amounts of data. By breaking down the data processing into smaller
stages, the pipeline can handle larger datasets. The pattern also allows for
easy scalability, making it easy to add additional stages as needed.
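The source, stages, sink structure can be sketched compactly. The example below is in Python rather than C# for brevity (a C# version would typically chain stages with BlockingCollection&lt;T&gt; or TPL Dataflow to run them concurrently), and the stage names are invented; it shows the essential shape, each stage consuming the previous stage's output:

```python
# Minimal pipeline sketch: source -> stages -> sink, with each stage's
# output becoming the next stage's input (stage names are illustrative).

def source(n):                      # source: where data enters the pipeline
    yield from range(n)

def square(items):                  # stage 1: transform each item
    for x in items:
        yield x * x

def keep_even(items):               # stage 2: filter items
    for x in items:
        if x % 2 == 0:
            yield x

def sink(items):                    # sink: where the final output goes
    return list(items)

result = sink(keep_even(square(source(6))))
print(result)  # [0, 4, 16]
```

Because each stage only depends on the previous one's output stream, adding a new stage means writing one more function and inserting it into the chain, which is the scalability benefit the excerpt describes.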
Accuracy Improves When Large Language Models Collaborate
Not surprisingly, this idea of group-based collaboration also makes sense with
large language models (LLMs), as recent research from MIT’s Computer Science
and Artificial Intelligence Laboratory (CSAIL) is now showing. In particular,
the study focused on getting a group of these powerful AI systems to work with
each other using a kind of “discuss and debate” approach, in order to arrive
at the best and most factually accurate answer. Powerful large language model
AI systems, like OpenAI’s GPT-4 and Meta’s open source LLaMA 2, have been
attracting a lot of attention lately with their ability to generate convincing
human-like textual responses about history, politics and mathematical
problems, as well as producing passable code, marketing copy and poetry.
However, the tendency of these AI tools to “hallucinate”, or come up with
plausible but false answers, is well documented, making LLMs potentially
unreliable as a source of verified information. To tackle this problem, the
MIT team claims that the tendency of LLMs to generate inaccurate information
will be significantly reduced with their collaborative approach, especially
when combined with other methods like better prompt design, verification and
scratchpads for breaking down a larger computational task into smaller,
intermediate steps.
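The discuss-and-debate loop itself is simple to outline. A minimal sketch, assuming stub agents in place of real LLM API calls: each agent answers, then revises after seeing its peers' current answers, and the final answer is taken by majority vote (the CSAIL paper's actual prompting and aggregation details may differ).

```python
# Hypothetical sketch of a multi-agent "discuss and debate" loop.
# The `ask` callables stand in for real LLM API calls.
from collections import Counter

def debate(agents, question, rounds=2):
    """agents: list of callables ask(question, peer_answers) -> answer."""
    answers = [ask(question, []) for ask in agents]          # initial pass
    for _ in range(rounds):
        # Each agent revises in light of all OTHER agents' current answers.
        answers = [ask(question, answers[:i] + answers[i + 1:])
                   for i, ask in enumerate(agents)]
    return Counter(answers).most_common(1)[0][0]             # majority vote

# Stub agents: two are confident in "4"; one starts wrong but defers to
# the majority of its peers once it sees their answers.
confident = lambda q, peers: "4"
swayable = lambda q, peers: (Counter(peers).most_common(1)[0][0]
                             if peers else "5")

print(debate([confident, confident, swayable], "What is 2 + 2?"))  # "4"
```

The point of the iteration is that an agent holding a plausible-but-wrong answer gets repeated chances to be corrected by its peers before the vote, which is the mechanism the researchers credit for reducing hallucination.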
There's AI, and Then There's AGI: What You Need to Know to Tell the Difference
For starters, the ability to perform multiple tasks, as an AGI would, does not
imply consciousness or self-will. And even if an AI had self-determination,
the number of steps required to decide to wipe out humanity and then make
progress toward that goal is too great to be realistically possible. "There's a
lot of things that I would say are not hard evidence or proof, but are working
against that narrative [of robots killing us all someday]," Riedl said. He
also pointed to the issue of planning, which he defined as "thinking ahead
into your own future to decide what to do to solve a problem that you've never
solved before." LLMs are trained on historical data and are very good at using
old information like itineraries to address new problems, like how to plan a
vacation. But other problems require thinking about the future. "How does
an AI system think ahead and plan how to eliminate its adversaries when there
is no historical information about that ever happening?" Riedl asked. "You
would require … planning and look ahead and hypotheticals that don't exist yet
… there's this big black hole of capabilities that humans can do that AI is
just really, really bad at."
Metaverse and the future of product interaction
As the metaverse continues to evolve, so must the approach to product design.
This includes considering how familiar objects can be repurposed as functional
interface elements in a virtual environment. Additionally, understanding the
dynamics of group interactions in virtual spaces is crucial. Designers must
anticipate these trends and adapt their designs accordingly, ensuring that
products remain relevant and engaging in the ever-changing landscape of the
metaverse. In India, the metaverse presents significant opportunities for
businesses to redefine consumer experiences. It opens up possibilities for
more interactive, personalised, and adventurous engagements with customers.
This not only increases customer engagement and loyalty but also creates new
avenues for value exchange and revenue streams. The metaverse, with its
potential to impact diverse sectors like communications, retail,
manufacturing, education, and banking, is poised to be a game-changer in the
Indian market. ... As the metaverse continues to expand its reach and
influence, businesses and designers in India and around the world must evolve
to meet the demands of this new digital era.
Build trust to win out with genAI
Businesses need to adopt ‘responsible technology’ practices, which will give
them a powerful lever that enables them to deploy innovative genAI solutions
while building trust with consumers. Responsible tech is a philosophy that
aligns an organization’s use of technology to both individuals’ and society’s
interests. It includes developing tools, methodologies, and frameworks that
observe these principles at every stage of the product development cycle. This
ensures that ethical concerns are baked in at the outset. This approach is
gaining momentum, as people realize how technologies such as genAI can impact
their daily lives. Even organizations such as the United Nations are codifying
their approach to responsible tech. Consumers urgently want organizations to
be responsible and transparent with their use of genAI. This can be a
challenge because, when it comes to transparency, there are a multitude of
factors to consider, including everything from acknowledging that AI is being
used to disclosing what data sources are used, what steps were taken to reduce
bias, how accurate the system is, and even the carbon footprint associated
with the genAI system.
Quote for the day:
"Entrepreneurs average 3.8 failures
before final success. What sets the successful ones apart is their amazing
persistence." -- Lisa M. Amos