Cultivating cognitive liberty in the age of generative AI
Cognitive liberty is a pivotal component of human flourishing that has been
overlooked by traditional theories of liberty—primarily because we have taken
for granted that our brains and mental experiences are under our own control.
This assumption is being replaced with more nuanced understandings of the
human brain and its interaction with our environment, our interactions with
others, and our interdependence with technology. Cultivating cognitive liberty
in the digital age will become increasingly vital to enable humans to exercise
individual agency, nurture human creativity, discern fact from fiction, and
reclaim our critical thinking skills amid unprecedented cognitive
opportunities and risks. Generative AI tools like GPT-4 pose new challenges to
cognitive liberty, including the potential to interfere with and manipulate
our mental experiences. They can exacerbate biases and distortions that
undermine the integrity and reliability of the information we consume, in turn
influencing our beliefs, judgments, and decisions.
Smart homes, smart choices: How innovation is redefining home furnishing
Most notably, the advent of innovations has made shopping for furniture online
a far more enjoyable experience. It begins with options. Today, online
furniture websites provide customers with a vastly larger catalog of choices
than a brick-and-mortar store could imagine, since there are no physical
constraints in the digital realm. But vast selections alone are just the
beginning. That’s why innovations like AR and VR are so important. Once
shoppers identify potential items, AR and VR allow them to view each piece
online, examining not just static images but views from all sides and angles,
and personalizing it to fit their style and home. ... First,
they understand various key factors, including the origin of the materials
being used, how they were made, the labor practices involved, potential
environmental impacts, and more. At Wayfair, we are leading the way by
including sustainability certifications on approved items as part of our Shop
Sustainably commitment. This shift is part of a larger movement called
conscious consumerism, where purchasing decisions favor products with
positive social, economic, and environmental impacts.
A Guide to Model Composition
At its core, model composition is a strategy in machine learning that combines
multiple models to solve a complex problem that cannot be easily addressed by
a single model. This approach leverages the strengths of each individual
model, providing more nuanced analyses and improved accuracy. Model
composition can be seen as assembling a team of experts, where each member
brings specialized knowledge and skills to the table, working together to
achieve a common goal. Many real-world problems are too complicated for a
one-size-fits-all model. By orchestrating multiple models, each trained to
handle specific aspects of a problem or data type, we can create a more
comprehensive and effective solution. There are several ways to implement
model composition, including but not limited to: Sequential processing: Models
are arranged in a pipeline, where the output of one model serves as the input
for the next. ... Parallel processing: Multiple models run in parallel,
each processing the same input independently. Their outputs are then combined,
either by averaging, voting, or through a more complex aggregation model, to
produce a final result.
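As a minimal sketch of both patterns (assuming scikit-learn purely for
illustration; the article does not name a framework), sequential composition
maps naturally onto a Pipeline, and parallel composition onto a
VotingClassifier:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, random_state=0)

# Sequential composition: each stage's output is the next stage's input.
sequential = Pipeline([
    ("scale", StandardScaler()),                      # transform raw features
    ("classify", LogisticRegression(max_iter=1000)),  # consume transformed features
])
sequential.fit(X, y)

# Parallel composition: independent models score the same input, and their
# outputs are aggregated into a single prediction.
parallel = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("forest", RandomForestClassifier(n_estimators=50, random_state=0)),
    ],
    voting="soft",  # average predicted probabilities rather than hard votes
)
parallel.fit(X, y)
print(parallel.predict(X[:5]))
```

Here soft voting averages the members' predicted probabilities; switching to
voting="hard" would aggregate by majority vote instead, and a learned
aggregation model (stacking) is the more complex option the article alludes to.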
Securing IoT devices is a challenging yet crucial task for CIOs: Silicon Labs CEO
Likewise, as IoT deployments expand, we’ll need scalable infrastructure and
solutions capable of accommodating growing device numbers and data volumes.
Many countries have their own nuanced regulatory compliance schemes, which add
another layer of complexity, especially for data privacy and security
regulations. Notably, in India, cost considerations, including initial
deployment costs and ongoing maintenance expenses, can be a barrier to
adoption, necessitating an understanding of return on investment. ... Silicon
Labs has played a key role in advancing IoT and AI adoption through
collaborations with industry and academia, including a recent partnership with
IIIT-H in India. In 2022, we launched India's first campus-wide Wi-SUN network
at the IIIT-H Smart City Living Lab, enabling remote monitoring and control of
campus street lamps. This network provides students and researchers with
hands-on experience in developing smart city solutions. Silicon Labs also
supports STEM education initiatives like Code2College to inspire innovation in
the IoT and AI fields.
Cyber resilience: A business imperative CISOs must get right
Often, organizations have more capabilities than they realize, but these
resources can be scattered throughout different departments. And each group
responsible for establishing cyber resilience might lack full visibility into
the existing capabilities within the organization. “Network and security
operations have an incredible wealth of intelligence that others would benefit
from,” Daniels says. Many companies are integrating cyber resilience into
their enterprise risk management processes. They have started taking proactive
measures to identify vulnerabilities, assess risks, and implement appropriate
controls. “This includes exposure assessment, regular validation such as
penetration testing, and continuous monitoring to detect and respond to
threats in real-time,” says Angela Zhao, director analyst at Gartner. ... The
rise of generative AI as a tool for hackers further complicates organizations'
resilience strategies. That’s because generative AI equips even low-skilled
individuals with the means to execute complex cyber attacks. As a result, the
frequency and severity of attacks might increase, forcing businesses to up
their game.
Is an open-source AI vulnerability next?
The challenges within the AI supply chain mirror those of the broader software
supply chain, with added complexity when integrating large language models
(LLMs) or machine learning (ML) models into organizational frameworks. For
instance, consider a scenario where a financial institution seeks to leverage
AI models for loan risk assessment. This application demands meticulous
scrutiny of the AI model’s software supply chain and training data origins to
ensure compliance with regulatory standards, such as prohibiting protected
categories in loan approval processes. To illustrate, let’s examine how a bank
integrates AI models into its loan risk assessment procedures. Regulations
mandate strict adherence to loan approval criteria, forbidding the use of
race, sex, national origin, and other demographics as determining factors.
Thus, the bank must consider and assess the AI model’s software and training
data supply chain to prevent biases that could lead to legal or regulatory
complications. This issue extends beyond individual organizations. The broader
AI technology ecosystem faces concerning trends.
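As a concrete, if simplified, illustration of the kind of scrutiny described
above, the hypothetical sketch below screens a model's documented input
features for protected attributes; the attribute set, function, and feature
names are invented for illustration and are not drawn from any regulation or
the article:

```python
# Hypothetical guardrail: flag a candidate model whose documented input
# features include legally protected attributes. All names are illustrative.
PROTECTED_ATTRIBUTES = {"race", "sex", "national_origin", "religion"}

def find_protected_features(feature_names):
    """Return the protected attributes present among a model's input features."""
    return [name for name in feature_names if name.lower() in PROTECTED_ATTRIBUTES]

# Feature list as it might appear in a model card shipped with the model.
model_card_features = ["income", "credit_history_length", "sex", "loan_amount"]

violations = find_protected_features(model_card_features)
if violations:
    print(f"Reject model: input features include protected attributes {violations}")
```

A real assessment would go further, auditing the training data's provenance
and testing for proxy variables that correlate with protected categories, but
even a schema-level check makes the supply-chain question actionable.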
Google’s call-scanning AI could dial up censorship by default, privacy experts warn
Google’s demo of the call scam-detection feature, which the tech giant said
would be built into a future version of its Android OS — estimated to run on
some three-quarters of the world’s smartphones — is powered by Gemini Nano,
the smallest of its current generation of AI models meant to run entirely
on-device. This is essentially client-side scanning, a nascent technology
that’s generated huge controversy in recent years in relation to efforts to
detect child sexual abuse material (CSAM) or even grooming activity on
messaging platforms. ... Cryptography expert Matthew Green, a professor at
Johns Hopkins, also took to X to raise the alarm. “In the future, AI models
will run inference on your texts and voice calls to detect and report illicit
behavior,” he warned. “To get your data to pass through service providers,
you’ll need to attach a zero-knowledge proof that scanning was conducted. This
will block open clients.” Green suggested this dystopian future of censorship
by default is only a few years out from being technically possible. “We’re a
little ways from this tech being quite efficient enough to realize, but only a
few years. A decade at most,” he suggested.
Data strategy? What data strategy?
A recent survey of UKI SAP users found that only 12 percent of respondents had
a data strategy that covers their entire organization; these are people who
are very likely to be embarking on tricky migrations to S/4HANA. Without
properly understanding and governing the data they’re migrating, they’re en
route to some serious difficulties. That’s because, more often than not, when
a digital transformation project is on the cards, data takes a back seat. In
the flurry of deadlines, testing, and troubleshooting, it feels so much more
important to get the infrastructure in place and deal with the data later. The
single goal is switching on the new system. Fixing the data flaws that caused
so many headaches with the old solution is rarely top of the list. But those
flaws and headaches are telling you something: your data needs serious
attention. Unless you take action, those data silos that slow down
decision-making and the data management challenges that are a blocker to
innovation will follow you to your new infrastructure.
Designing and developing APIs with TypeSpec
TypeSpec is in wide use inside Microsoft, having spread from its original home
in the Azure SDK team to the Microsoft Graph team, among others. Having two of
Microsoft’s largest and most important API teams using TypeSpec is a good sign
for the rest of us, as it both shows confidence in the toolkit and ensures
that the underlying open-source project has an active development community.
Certainly, the open-source project, hosted on GitHub, is very active. It
recently released TypeSpec 0.56 and has received over 2000 commits. Most of
the code is written in TypeScript and compiled to JavaScript so it runs on
most development systems. TypeSpec is cross-platform and will run anywhere
Node.js runs. Installation is done via npm. While you can use any programmer’s
editor to write TypeSpec code, the team recommends using Visual Studio Code,
as a TypeSpec extension for VS Code provides a language server and support for
common environment variables. This behaves like most VS Code language
extensions, giving you diagnostic tools, syntax highlighting, and IntelliSense
code completion.
What’s holding CTOs back?
“Obviously, technology strategy and business strategy have to be ultimately
driven by the vision of the organization," Jones says, "but it was surprising
that over a third of CTOs we surveyed felt they weren’t getting clear vision
and guidance.” The CTO role also means different things in different
organizations. “The CTO role is so diverse and spans everything from a CTO who
works for the CIO and is making the organization more efficient, all the way
to creating visibility for the future and transformations," Jones says. ...
Plexus Worldwide’s McIntosh says internal politics and some level of
bureaucracy are unavoidable for CTOs seeking to push forward technology
initiatives. “Navigating and managing this within an organization requires a
balance of experience and influence to lessen any potential negative impact,"
he says. Experienced leaders who have been with a company a long time “are
often skilled at understanding the intricate web of relationships, power
dynamics, and competing interests that shape internal politics and
bureaucratic hurdles,’’ McIntosh says.
Quote for the day:
"The leader has to be practical and a
realist, yet must talk the language of the visionary and the idealist." --
Eric Hoffer