A team of ex-Apple employees wants to replace smartphones with this AI projector
It's a seamless blend of technology and human interaction that Humane believes
can extend to daily schedule run-downs, map directions, and visual aids for
cooking or fixing a car engine -- as suggested by the
company's public patents. The list goes on. Chaudhri also demoed the wearable's
voice translator, which converted his English into French while using an
AI-generated voice to retain his tone and timbre, as reported by designer
Michael Mofina, who watched the recorded TED Talk before it was taken down.
Mofina also shared an instance when the wearable was able to recap the user's
missed notifications without sounding invasive, framing them as, "You got an
email, and Bethany sent you some photos." Perhaps the biggest draw to Humane and
its AI projector is the team behind it. That roster includes Chaudhri, a former
Director of Design at Apple who worked on the Mac, iPod, iPhone, and other
prominent devices, and Bethany Bongiorno, also formerly of Apple, where she was
heavily involved in software management for iOS and macOS.
Three issues with generative AI still need to be solved
Generative AI uses massive language models, it’s processor-intensive, and it’s
rapidly becoming as ubiquitous as browsers. This is a problem because existing,
centralized datacenters aren’t structured to handle this kind of load. They are
I/O-constrained, processor-constrained, database-constrained, cost-constrained,
and size-constrained, making a massive increase in centralized capacity unlikely
in the near term, even though the need for this capacity is going vertical.
These capacity problems will increase latency, reduce reliability, and over time
could throttle performance and reduce customer satisfaction with the result. The
need is for a more hybrid approach in which the AI components necessary for
speed are retained locally (on devices) while the majority of the data resides
centrally, reducing datacenter loads and decreasing latency. Without a hybrid
solution — where smartphones and laptops can do much of the work — use of the
technology is likely to stall as satisfaction falls, particularly in areas such
as gaming, translation, and conversations where latency will be most
annoying.
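As a rough illustration of the hybrid split the piece argues for, the sketch below routes latency-sensitive tasks to a small on-device model and sends everything else to a central service. The model stub, the endpoint URL, and the task categories are hypothetical placeholders, not anything described in the article.

```python
# Minimal sketch of a hybrid inference split: latency-sensitive tasks stay on
# the device, heavier work goes to a central datacenter. Everything here
# (model stub, endpoint, task names) is illustrative, not a real product API.
import requests

LATENCY_SENSITIVE = {"translation", "conversation", "gaming"}

def local_model(prompt: str) -> str:
    """Stand-in for a small model running on the phone or laptop itself."""
    return f"[on-device] {prompt}"

def generate(task: str, prompt: str) -> str:
    if task in LATENCY_SENSITIVE:
        # Serve locally and skip the round trip to the datacenter entirely.
        return local_model(prompt)
    # Fall back to the centralized service for work that can tolerate latency.
    resp = requests.post(
        "https://central-ai.example.com/v1/generate",  # hypothetical endpoint
        json={"prompt": prompt},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["text"]

print(generate("translation", "Bonjour tout le monde"))
```

The point of the split is simply that the interactive, latency-sensitive path never leaves the device, while batchable work is free to queue up centrally.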
Exploring The Incredible Capabilities Of Auto-GPT
The first notable application is code improvement. Auto-GPT can read, write
and execute code and thus can improve its own programming. The AI can
evaluate, test and update code to make it faster, more reliable, and more
efficient. In a recent tweet, Auto-GPT’s developer, Significant Gravitas,
shared a video of the tool checking a simple example function responsible for
math calculations. While this particular example contained only a simple
syntax error, the AI still took roughly a minute to correct the mistake; a
human, however, could need far longer to find and fix such a bug in a codebase
containing hundreds or thousands of lines. ... The second notable application
is in building an
app. Auto-GPT detected that Varun Mayya needed the Node.js runtime environment
to build an app, which was missing on his computer. Auto-GPT searched for
installation instructions, downloaded and extracted the archive, and then
started a Node server to continue with the job. While Auto-GPT made the
installation process effortless, Mayya cautions against using AI for coding
unless you already understand programming, as it can still make errors.
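The evaluate-test-update cycle described above can be pictured with a short sketch. This is not Auto-GPT's actual code: `propose_fix` is a hypothetical stand-in for a language-model call, and the test command and file path are placeholders.

```python
# Sketch of an evaluate-test-update loop: run the tests, hand the failing
# output to a model, apply the suggested rewrite, and try again.
import subprocess
from pathlib import Path

def propose_fix(source: str, error_output: str) -> str:
    """Hypothetical LLM call that returns a corrected version of `source`."""
    raise NotImplementedError("replace with a real model call")

def self_repair(path: str, test_cmd: list[str], max_attempts: int = 3) -> bool:
    target = Path(path)
    for _ in range(max_attempts):
        result = subprocess.run(test_cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return True  # tests pass; nothing left to fix
        fixed = propose_fix(target.read_text(), result.stderr)
        target.write_text(fixed)  # apply the model's rewrite and re-test
    return False

# Example (hypothetical project layout):
# self_repair("calc.py", ["python", "-m", "pytest", "tests/"])
```

The loop only trusts a change once the tests pass again, which is also why Mayya's caution applies: someone still has to judge whether the tests themselves are adequate.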
The Best (and Worst) Reasons to Adopt OpenTelemetry
Gathering telemetry data can be a challenge, and with OpenTelemetry now
handling essential signals like metrics, traces and logs, you might feel the
urge to save your company some cash by building your own system. As a
developer myself, I totally get that feeling, but I also know how easy it is
to underestimate the effort involved by just focusing on the fun parts when
kicking off the project. No joke, I’ve actually seen organizations assign
teams of 50 engineers to work on their observability stack, even though the
company’s core business is something else entirely. Keep in mind that data
collection is just a small part of what observability tools do these days. The
real challenge lies in data ingestion, retention, storage and, ultimately,
delivering valuable insights from your data at scale. ... At the very least,
auto-instrumentation will search for recognized libraries and APIs and then
add some code to indicate the start and end of well-known function calls.
Additionally, auto-instrumentation takes care of capturing the current context
from incoming requests and forwarding it to downstream requests.
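As a rough manual equivalent of what auto-instrumentation injects, the sketch below uses the OpenTelemetry Python API to start a span around a handler, extract the trace context from an incoming request's headers, and inject it into the headers of a downstream call. Exporter and SDK configuration are omitted, and the downstream URL is a placeholder.

```python
# Manual sketch of what auto-instrumentation does for you: a span around a
# well-known call plus context propagation from incoming to outgoing requests.
import requests
from opentelemetry import trace
from opentelemetry.propagate import extract, inject

tracer = trace.get_tracer(__name__)

def handle_request(incoming_headers: dict) -> None:
    # Pick up the trace context forwarded by the caller...
    parent_ctx = extract(incoming_headers)
    with tracer.start_as_current_span("handle_request", context=parent_ctx):
        outgoing_headers: dict = {}
        inject(outgoing_headers)  # ...and forward it to the downstream service
        requests.get("https://downstream.example.com/api",
                     headers=outgoing_headers)
```

Auto-instrumentation does this wiring for every recognized library, which is exactly the tedium the article suggests you should not rebuild by hand.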
OpenAI’s hunger for data is coming back to bite it
The Italian authority says OpenAI is not being transparent about how it
collects users’ data during the post-training phase, such as in chat logs of
their interactions with ChatGPT. “What’s really concerning is how it uses data
that you give it in the chat,” says Leautier. People tend to share intimate,
private information with the chatbot, telling it about things like their
mental state, their health, or their personal opinions. Leautier says it is
problematic if there’s a risk that ChatGPT regurgitates this sensitive data to
others. And under European law, users need to be able to get their chat log
data deleted, he adds. OpenAI is going to find it near-impossible to identify
individuals’ data and remove it from its models, says Margaret Mitchell, an AI
researcher and chief ethics scientist at startup Hugging Face, who was
formerly Google’s AI ethics co-lead. The company could have saved itself a
giant headache by building in robust data record-keeping from the start, she
says. Instead, it is common in the AI industry to build data sets for AI
models by scraping the web indiscriminately and then outsourcing the work of
removing duplicates or irrelevant data points, filtering unwanted things, and
fixing typos.
Executive Q&A: The State of Cloud Analytics
Many businesses are trying hard right now to stay profitable during these
times of economic uncertainty. The startling takeaway for us was that business
and technical leaders see cloud analytics as the tool -- not a silver bullet,
but a critical component -- for staying ahead of the pack in the current
economic climate. Not only that, organizations need to do more with less and,
as it turns out, cloud analytics is a wise investment not only during good
economic times but also in more challenging ones. Businesses reap
benefits from the same solution (cloud analytics) in either scenario. For
example, cloud analytics is typically more cost-effective than on-premises
analytics solutions because it eliminates the need for businesses to invest in
expensive hardware and IT infrastructure. It also offers the flexibility
businesses need to quickly experiment with new data sources, analytics tools,
and data models to get better insights -- without having to worry about the
underlying infrastructure.
AI vs. machine learning vs. data science: How to choose
It's a common topic for organizational leaders—they want to be able to
articulate the core differences between AI, machine learning (ML), and data
science (DS). However, sometimes they do not understand the nuances of each
and thus struggle to strategize their approach to things such as salaries,
departments, and where they should allocate their resources.
Software-as-a-Service (SaaS) and e-commerce companies specifically are being
advised to focus on an AI strategy without being told why or what that means
exactly. Understanding the complexity of the tasks you aim to accomplish will
determine where your company needs to invest. It is helpful to quickly outline
the core differences between each of these areas and give better context to
how they are best utilized. ... To decide whether your company needs to rely
on AI, ML, or data science, focus on one principle to begin: Identify the most
important tasks you need to solve and let that be your guide.
The strong link between cyber threat intelligence and digital risk protection
ESG defined cyber threat intelligence as, “evidence-based actionable knowledge
about the hostile intentions of cyber adversaries that satisfies one or
several requirements.” In the past, this definition really applied to data on
IoCs, reputation lists (e.g., lists of known bad IP addresses, web domains, or
files), and details on TTPs. The intelligence part of DRP is intended to
provide continuous monitoring of things like user credentials, sensitive data,
SSL certificates, or mobile applications, looking for general weaknesses,
hacker chatter, or malicious activities in these areas. For example, a
fraudulent website could indicate a phishing campaign using the organization’s
branding to scam users. The same applies for a malicious mobile app. Leaked
credentials could be for sale on the dark web. Bad guys could be exchanging
ideas for a targeted attack. You get the picture. It appears from the research
that the proliferation of digital transformation initiatives is acting as a
catalyst for threat intelligence programs. When asked why their organizations
started a CTI program, 38% said “as a part of a broader digital risk
protection effort in areas like brand reputation, executive protection,
deep/dark web monitoring, etc.”
4 perils of being an IT pioneer
An enterprise-wide IT project is deemed successful only when a team member at
the lowest level of the hierarchy adopts it. Ensuring adoption of any new
solution is always a challenge, even more so for a solution based on a new
technology. There's pushback from end users because they find the idea of
losing power or skills in the face of new technology disconcerting. For any IT
leader, overcoming this inertia is always among the toughest challenges.
Moreover, IT leaders have seen many initiatives based on new technologies fail
because there was no buy-in from the company’s top leadership. Even if users
adopt the new technology, the initial learning curve is often steep,
impacting productivity. Most organizations can’t afford or aren’t ready to
accept the temporary revenue loss due to the disruption caused by the new
technology. Therefore, business and IT leaders must have a clear understanding
of the risk/reward principle when rolling out new tech. Buy-in from top
management as a top-down mandate can make adoption of new technology
easier.
Is Generative AI an Enterprise IT Security Black Hole?
Shutting the door on generative AI might not be a possibility for
organizations, even for the sake of security. “This is the new gold rush in
AI,” says Richard Searle, vice president of confidential computing at
Fortanix. He cited news of venture capital looking into this space along with
tech incumbents working on their own AI models. Such endeavors may make use of
readily available resources to get into the AI race fast. “One of the
important things about the way that systems like GPT-3 were trained is that
they also use common crawl web technology,” Searle says. “There’s going to be
an arms race around how data is collected and used for training.” That may
also mean increased demand for security resources as the technology floods the
landscape. “It seems like, as in all novel technologies, what’s happening is
the technology is racing ahead of the regulatory oversight,” he says, “both in
organizations and the governmental level.”
Quote for the day:
"Our chief want is someone who will
inspire us to be what we know we could be." --
Ralph Waldo Emerson