The Promise of Analog AI
In neural networks, the most common operation is multiply-accumulate: you
multiply sets of numbers and then sum them up, as in the matrix multiplication
that is the backbone of deep learning. If you store the inputs as arrays, you can
do this in one swoop by exploiting physical laws (Ohm’s Law
to multiply, Kirchhoff’s Law to sum) across a full matrix in parallel. This is the
crux of analog AI. If it were that easy, analog AI would already be used. Why
aren’t we using analog AI yet? ... Right now, analog AI works successfully for
multiply-accumulate operations. Other operations are still best served by their
own dedicated circuitry, since programming nonvolatile memory devices takes
longer and wears them out faster than traditional devices. Inference
does not typically require reprogramming these devices, since weights rarely
change. For training, however, they would require constant reconfiguration. In
addition, analog’s variability results in a mismatch between error in forward
propagation (inference) and backpropagation (calculating error during training).
This can cause issues during training.
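To make the multiply-accumulate idea concrete, here is a minimal sketch (not from the article) that simulates an analog crossbar in NumPy: each weight is stored as a conductance, Ohm’s Law gives the per-cell current (conductance times input voltage), and Kirchhoff’s Current Law sums the currents along each column wire. A small random perturbation stands in for the device variability mentioned above.

```python
import numpy as np

def analog_matvec(conductances, voltages, variability=0.0, rng=None):
    """Simulate one analog multiply-accumulate pass on a crossbar.

    conductances : (rows, cols) array of programmed device conductances (the weights)
    voltages     : (rows,) array of input voltages (the activations)
    variability  : relative std-dev of per-device conductance noise (0 = ideal)
    """
    rng = rng or np.random.default_rng()
    if variability > 0:
        # Device-to-device variation: each cell deviates slightly from its programmed value.
        conductances = conductances * (1 + variability * rng.standard_normal(conductances.shape))
    # Ohm's Law: current through each cell is conductance * voltage.
    cell_currents = conductances * voltages[:, None]
    # Kirchhoff's Current Law: currents on each column wire sum together.
    return cell_currents.sum(axis=0)

# The whole matrix-vector product happens "in one swoop" on the array.
G = np.array([[0.2, 0.5], [0.1, 0.4], [0.3, 0.6]])   # 3x2 weight matrix stored as conductances
V = np.array([1.0, 0.5, 0.25])                        # input voltages
print(analog_matvec(G, V))                            # ideal result, equals V @ G
print(analog_matvec(G, V, variability=0.05))          # noisy analog result
```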
Computing’s new logic: The distributed data cloud
A common pattern in analytic ecosystems today sees data produced in different
areas of the business pushed to a central location. The data flows into data
lakes and is cordoned off in data warehouses, managed by IT personnel. The original
producers of the data, often subject-matter experts within the business domain,
effectively lose control or become layers removed from data meaningful to their
work. This separation diminishes the data’s value over time, with data diverted
away from its business consumers. Imagine a new model that flips this ecosystem
on its head by breaking down barriers and applying common standards everywhere.
Consider an analytics stack that could be deployed within a business domain; it
remains there, owned by team members in that business domain, but centrally
operated and supported by IT. What if all data products generated there were
completely managed within that domain? What if other business teams could simply
subscribe to those data products, or get API access to them? An organizational
pattern called data mesh, which promotes this decentralization of data product
ownership, has received a great deal of attention recently.
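Purely as an illustration of the subscribe-to-a-data-product idea (the article prescribes no implementation, and every name below is hypothetical), a domain-owned data product might expose a small contract like this:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical sketch of a domain-owned "data product" in a data mesh.
# The owning domain publishes the product; other teams subscribe or pull
# data through a well-defined interface instead of a central warehouse.

@dataclass
class DataProduct:
    name: str
    owner_domain: str                      # the business domain that owns and operates it
    schema: Dict[str, str]                 # column name -> type, the published contract
    _subscribers: List[Callable] = field(default_factory=list)

    def subscribe(self, callback: Callable) -> None:
        """Another team registers interest in new data (push model)."""
        self._subscribers.append(callback)

    def publish(self, records: List[dict]) -> None:
        """The owning domain publishes new records to all subscribers."""
        for notify in self._subscribers:
            notify(records)

# A consuming team subscribes without needing access to a central data lake.
orders = DataProduct("orders", owner_domain="sales",
                     schema={"order_id": "str", "amount": "float"})
orders.subscribe(lambda recs: print(f"finance received {len(recs)} records"))
orders.publish([{"order_id": "A-1", "amount": 42.0}])
```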
New program bolsters innovation in next-generation artificial intelligence hardware
Based on use-inspired research involving materials, devices, circuits,
algorithms, and software, the MIT AI Hardware Program convenes researchers from
MIT and industry to facilitate the transition of fundamental knowledge to
real-world technological solutions. The program spans materials and devices, as
well as architecture and algorithms enabling energy-efficient and sustainable
high-performance computing. “As AI systems become more sophisticated, new
solutions are sorely needed to enable more advanced applications and deliver
greater performance,” says Daniel Huttenlocher, dean of the MIT Schwarzman
College of Computing and Henry Ellis Warren Professor of Electrical Engineering
and Computer Science. “Our aim is to devise real-world technological solutions
and lead the development of technologies for AI in hardware and software.” The
inaugural members of the program are companies from a wide range of industries
including chip-making, semiconductor manufacturing equipment, AI and computing
services, and information systems R&D organizations.
The Data Center of the Future
The data center of the future will have to be vendor agnostic. No matter the
hardware or underlying virtual machine or container technology, operating and
administration capabilities should be seamless. This flexibility enables
companies to streamline their deployment and maintenance processes and
prevents vendor lock-in. And because no cloud provider is present everywhere
in the world, the ideal data center should have the ability to run in any
environment in order to achieve the distribution requirements discussed above.
For that reason, new data centers will largely be made of open source
components in order to achieve such a level of interoperability. Distribution
and flexibility should not come at the expense of ease of use. Data centers
must allow for seamless cloud native capabilities, such as the ability to
scale computing and storage resources on demand, as well as API access for
integrations. While this is the norm for containers and virtual machines on
servers, the same capabilities should apply across environments, even for
remote devices such as IoT and edge servers.
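As one hedged illustration of “scale on demand” through an API (the article names no specific stack; the deployment, namespace, and replica count below are made up), a Python client could ask a Kubernetes cluster to scale a workload like this:

```python
from kubernetes import client, config

def scale_deployment(name: str, namespace: str, replicas: int) -> None:
    """Scale a deployment to the requested replica count via the cluster API."""
    config.load_kube_config()              # use local kubeconfig credentials
    apps = client.AppsV1Api()
    # Patch only the replica count; everything else about the deployment stays as-is.
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

# Hypothetical example: burst an ingestion service up to 10 replicas on demand.
scale_deployment("ingestion-service", "analytics", replicas=10)
```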
Exchange Servers Speared in IcedID Phishing Campaign
The new campaign starts with a phishing email that carries a message about an
important document and a password-protected ZIP archive attachment; the
password is included in the email body. The email
seems extra convincing to users because it uses what’s called “thread
hijacking,” in which attackers use a portion of a previous thread from a
legitimate email found in the inbox of the stolen account. “By using this
approach, the email appears more legitimate and is transported through the
normal channels which can also include security products,” researchers wrote.
The majority of the originating Exchange servers that researchers observed in
the campaign appear to be unpatched and publicly exposed, “making the
ProxyShell vector a good theory,” they wrote. ProxyShell is a remote-code
execution (RCE) bug discovered in Exchange Servers last year that has since
been patched but has continued to be exploited by attackers. Once unzipped, the
archive contains a single ISO file with the same file name as the ZIP archive,
created shortly before the email was sent.
Web3 and the future of data portability: Rethinking user experiences and incentives on the internet
Web3 offers many advantages. Namely, data flows freely and is publicly
verifiable. Companies no longer need to build user authentication using things
like passwords into their applications. Instead, users can have a single
account for the internet in their Web3 wallet: think of this as a
“bring-your-own-account” architecture where the user verifies their account as
they browse different websites, without the need to create a unique username
and password for every site. Because authentication is based on public-key
cryptography, certain security gaps with the Web2 approach to authentication
(e.g., weak passwords and password reuse) are nonexistent. Users don’t have to
remember passwords or fill out multiple screens when they sign up for an
application. As with everything in tech, there are disadvantages, too. Web3
eliminates the password, but it introduces other weaknesses. Anybody who has
tried to set up a Web3 wallet like MetaMask knows that the user experience
(UX) can be foreign and unfriendly.
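To ground the “bring-your-own-account” idea, here is a minimal sketch of a challenge-response login backed by a wallet signature, using the eth_account library; the challenge text, function name, and flow are illustrative assumptions rather than anything specified in the article:

```python
from eth_account import Account
from eth_account.messages import encode_defunct

def verify_wallet_login(claimed_address: str, challenge: str, signature: str) -> bool:
    """Check that the holder of `claimed_address` signed the server's challenge.

    The site issues a one-time challenge string; the user's wallet (e.g. MetaMask)
    signs it with their private key, and the server recovers the signing address
    from the signature. No username or password is ever stored.
    """
    message = encode_defunct(text=challenge)
    recovered = Account.recover_message(message, signature=signature)
    return recovered.lower() == claimed_address.lower()

# Illustrative round trip, with a locally generated key standing in for a wallet.
acct = Account.create()
challenge = "Sign in to example.com, nonce=83f1"   # hypothetical one-time challenge
signed = Account.sign_message(encode_defunct(text=challenge), private_key=acct.key)
print(verify_wallet_login(acct.address, challenge, signed.signature))  # True
```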
Building a Culture of Full-Service Ownership
At its core, service ownership is about connecting humans to technologies and
services and understanding how they map to critical business outcomes.
Achieving this level of ownership requires an understanding of what and who
delivers critical business services. A clear understanding of what the
boundaries and dependencies are of a given service along with what value it
delivers is the starting point. And once it’s in production, you also need a
clear definition of who is responsible for it at any given time and what the
impact is if it isn’t running optimally or, worst case, fails altogether. Empowering
developers with this information brings DevOps teams much closer to their
customers, the business, and the value they create, which, in turn, leads to
better application design and development. Building a culture around a
full-service ownership model keeps developers closely tied to the applications
they build and, therefore, closer to the value they deliver. Within the
organization, this type of ownership breaks down long-established centralized
and siloed teams into cross-functional full-service teams.
Strategies for Assessing and Prioritizing Security Risks Such as Log4j
After gaining full visibility, it’s not uncommon for organizations to see tens
of thousands of vulnerabilities across large production infrastructures.
However, a list of theoretical vulnerabilities is of little practical use. Of
all the vulnerabilities an enterprise could spend time fixing, it's important
to identify which are the most critical to the security of the application and
therefore must be fixed first. To be able to determine this, it's important to
understand the difference between a vulnerability, which is a weakness in
deployed software that could be exploited by attackers for a particular result,
and exploitability, which indicates the presence of an attack path that can be
leveraged by an attacker to achieve a tangible gain. Vulnerabilities that
require high-privilege, local access in order to exploit are generally of
lesser concern because an attack path would be difficult to achieve for a
remote attacker. Of higher concern are vulnerabilities that can be triggered
by, for example, remote network traffic that would generally not be filtered
by firewall devices, and which are present on hosts that routinely receive
traffic directly from untrusted, internet sources.
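As a rough sketch of that triage logic (the field names, weights, and thresholds below are invented for illustration; only CVE-2021-44228, the Log4j flaw, is real), one could score each finding by how reachable it is to an attacker:

```python
from dataclasses import dataclass

# Hypothetical triage sketch: rank findings by exploitability, not just severity.

@dataclass
class Finding:
    cve_id: str
    remote: bool              # exploitable over the network vs. local-only
    privileges_needed: str    # "none", "low", or "high"
    internet_facing: bool     # host routinely receives traffic from untrusted sources
    filtered_by_firewall: bool

def exploitability_score(f: Finding) -> int:
    """Higher score = clearer attack path = fix sooner."""
    score = 0
    if f.remote:
        score += 3                                   # reachable without local access
    if f.privileges_needed == "none":
        score += 2                                   # no foothold required
    if f.internet_facing and not f.filtered_by_firewall:
        score += 3                                   # directly exposed to untrusted traffic
    return score

findings = [
    Finding("CVE-2021-44228", remote=True, privileges_needed="none",
            internet_facing=True, filtered_by_firewall=False),   # Log4Shell-like exposure
    Finding("CVE-XXXX-0001", remote=False, privileges_needed="high",
            internet_facing=False, filtered_by_firewall=True),   # local, high-privilege
]
for f in sorted(findings, key=exploitability_score, reverse=True):
    print(f.cve_id, exploitability_score(f))
```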
Digital Forensics Basics: A Practical Guide for Kubernetes DFIR
Containerization has gone mainstream, and Kubernetes won out as the
orchestration leader. Building and operating applications this way provides
massive elasticity, scalability, and efficiency in an ever accelerating
technology world. Although DevOps teams have made great strides in harnessing
the new tools, the benefits don’t come without challenges and tradeoffs. Among
them is the question of how to perform DFIR in Kubernetes, extract all relevant
data, and clean up your systems when a security incident occurs in one of
these modern environments. ... Digital Forensics and Incident Response (DFIR)
is the cybersecurity field that covers the techniques and best practices to
adopt when an incident occurs, focused on identifying, inspecting, and
responding to cyberattacks. Maybe you are familiar with DFIR on physical
machines or on information system hardware. Its guidelines are based on
carefully analyzing and storing the digital evidence of a security breach, but
also responding to attacks in a methodical and timely manner. All of this
minimizes the impact of an incident, reduces the attack surface, and prevents
future episodes.
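A minimal sketch of the evidence-capture step, assuming the Kubernetes Python client and a suspect pod named by the analyst (the pod and namespace below are hypothetical); a real DFIR playbook would also cordon the node, snapshot volumes, and preserve chain of custody:

```python
import json
from datetime import datetime, timezone

from kubernetes import client, config

def capture_pod_evidence(pod: str, namespace: str) -> None:
    """Snapshot a suspect pod's spec, status, and logs before any cleanup."""
    config.load_kube_config()
    core = client.CoreV1Api()

    # Preserve the pod definition and runtime status as structured evidence.
    pod_obj = core.read_namespaced_pod(name=pod, namespace=namespace)
    # Capture container logs before the pod is deleted or rescheduled.
    logs = core.read_namespaced_pod_log(name=pod, namespace=namespace)

    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    with open(f"{pod}-{stamp}-spec.json", "w") as fh:
        json.dump(client.ApiClient().sanitize_for_serialization(pod_obj), fh, indent=2)
    with open(f"{pod}-{stamp}-logs.txt", "w") as fh:
        fh.write(logs)

# Hypothetical usage during an incident:
capture_pod_evidence("suspicious-worker-7d9f", namespace="production")
```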
Security tool guarantees privacy in surveillance footage
Privid allows analysts to use their own deep neural networks that are
commonplace for video analytics today. This gives analysts the flexibility to
ask questions that the designers of Privid did not anticipate. Across a
variety of videos and queries, Privid was accurate within 79 to 99 percent of
a non-private system. “We’re at a stage right now where cameras are
practically ubiquitous. If there's a camera on every street corner, every
place you go, and if someone could actually process all of those videos in
aggregate, you can imagine that entity building a very precise timeline of
when and where a person has gone,” says MIT CSAIL ... Privid introduces a new
notion of “duration-based privacy,” which decouples the definition of privacy
from its enforcement — with obfuscation, if your privacy goal is to protect
all people, the enforcement mechanism needs to do some work to find the people
to protect, which it may or may not do perfectly. With this mechanism, you
don’t need to fully specify everything, and you're not hiding more information
than you need to.
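The excerpt doesn’t spell out the mechanism, but the general idea of duration-based aggregation can be sketched as follows; this is an illustrative simplification under assumed parameters, not Privid’s actual implementation: split the video into fixed-length chunks, run the analyst’s query per chunk, and add noise scaled to the maximum number of chunks any one person can span.

```python
import random

def private_aggregate(chunk_counts, max_chunks_per_person, epsilon=1.0):
    """Aggregate per-chunk query results with duration-calibrated Laplace noise.

    chunk_counts          : list of per-chunk results (e.g. people counted per chunk)
    max_chunks_per_person : bound on how many chunks one individual can appear in,
                            derived from a duration budget (e.g. 30 seconds on camera)
    epsilon               : privacy parameter; smaller = more noise = stronger privacy
    """
    total = sum(chunk_counts)
    # One person can change the total by at most `max_chunks_per_person`,
    # so that bound sets the sensitivity of the query.
    scale = max_chunks_per_person / epsilon
    # Sample Laplace(0, scale) noise as the difference of two exponentials.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return total + noise

# Hypothetical query: number of people detected in each 10-second chunk of footage.
per_chunk = [4, 6, 5, 7, 3, 5]
print(private_aggregate(per_chunk, max_chunks_per_person=3))
```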
Quote for the day:
"Every great leader can take you back
to a defining moment when they decided to lead" --
John Paul Warren