Quote for the day:
“Being responsible sometimes means
pissing people off.” -- Colin Powell

One option that has many pros and cons is to use genAI models that explicitly
avoid training on any information that is legally dicey. There are a handful
of university-led initiatives that say they try to limit model training data
to information that is legally in the clear, such as open source or public
domain material. ... “Is it practical to replace the leading models of today
right now? No. But that is not the point. This level of quality was built on
just 32 ethical data sources. There are millions more that can be used,”
Wiggins wrote in response to a reader’s comment on his post. “This is a
baseline that proves that Big AI lied. Efforts are underway to add more data
that will bring it up to more competitive levels. It is not there yet.” Still,
enterprises are investing in and planning for genAI deployments for the long
term, and they may find in time that ethically sourced models deliver both
safety and performance. ... Tipping the scales in the other direction is the
big model makers’ promises of indemnification. Some genAI vendors have said
they will cover the legal costs for customers who are sued over content
produced by their models. “If the model provides indemnification, this is what
enterprises should shoot for,” Moor’s Andersen said.

One go-to pattern the team observed, called the “Associative Algorithm,”
essentially organizes nearby steps into groups and then calculates a final
guess. You can think of this process as being structured like a tree, where
the initial numerical arrangement is the “root.” As you move up the tree,
adjacent steps are grouped into different branches and multiplied together. At
the top of the tree is the final combination of numbers, computed by
multiplying each resulting sequence on the branches together. The other way
language models guessed the final permutation was through a crafty mechanism
called the “Parity-Associative Algorithm,” which essentially whittles down
options before grouping them. It determines whether the final arrangement is
the result of an even or odd number of rearrangements of individual digits.
... “These behaviors tell us that transformers perform simulation by
associative scan. Instead of following state changes step-by-step, the models
organize them into hierarchies,” says MIT PhD student and CSAIL affiliate
Belinda Li SM ’23, a lead author on the paper. “How do we encourage
transformers to learn better state tracking? Instead of imposing that these
systems form inferences about data in a human-like, sequential way, perhaps we
should cater to the approaches they naturally use when tracking state
changes.”
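The two mechanisms described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's code: `compose` chains two rearrangement steps, `associative_scan` combines adjacent steps pairwise up a tree (the "Associative Algorithm"), and `parity` computes the even/odd signal the "Parity-Associative Algorithm" uses to whittle down candidates. The example permutations are made up.

```python
from functools import reduce

def compose(p, q):
    """Compose two permutations given as tuples: apply p first, then q."""
    return tuple(q[i] for i in p)

def sequential_scan(perms):
    """Step-by-step state tracking: fold the steps left to right."""
    return reduce(compose, perms)

def associative_scan(perms):
    """Tree-style combination: pair up adjacent steps and combine the
    partial results, halving the depth of the 'tree' at each level."""
    while len(perms) > 1:
        paired = [compose(perms[i], perms[i + 1])
                  for i in range(0, len(perms) - 1, 2)]
        if len(perms) % 2:          # odd element carries over to the next level
            paired.append(perms[-1])
        perms = paired
    return perms[0]

def parity(p):
    """Even/odd parity of a permutation via cycle counting -- the coarse
    signal used to narrow down the final arrangement before grouping."""
    seen, swaps = set(), 0
    for start in range(len(p)):
        if start in seen:
            continue
        j = start
        length = 0
        while j not in seen:
            seen.add(j)
            j = p[j]
            length += 1
        swaps += length - 1
    return swaps % 2                # 0 = even, 1 = odd

# Because composition is associative, both scans reach the same final state.
steps = [(1, 0, 2), (0, 2, 1), (2, 1, 0), (1, 2, 0)]
assert associative_scan(list(steps)) == sequential_scan(steps)
```

The point of the tree shape is depth: a sequential fold needs as many dependent steps as there are rearrangements, while the pairwise scan needs only logarithmically many levels, which is a better fit for a fixed-depth transformer.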

In the rapidly expanding realm of Decentralised Finance (DeFi), AI will play a
critical role in optimising complex lending, borrowing, and trading protocols.
AI can intelligently manage liquidity pools, optimise yield farming strategies
for better returns and reduced impermanent loss, and even identify subtle
arbitrage opportunities across various platforms. Crucially, AI will also be
vital in identifying and mitigating novel types of exploits that are unique to
the intricate and interconnected world of DeFi. Looking further ahead, AI will
be crucial in developing Quantum-Resistant Cryptography. As quantum computing
advances, it poses a theoretical threat to the underlying cryptographic
methods that secure current blockchain networks. AI can significantly
accelerate the research and development of “post-quantum cryptography” (PQC)
algorithms, which are designed to withstand the immense computational power of
future quantum computers. AI can also be used to simulate quantum attacks,
rigorously testing existing and new cryptographic designs for vulnerabilities.
Finally, the concept of Autonomous Regulation could redefine oversight in the
crypto space. Instead of traditional, reactive regulatory approaches,
AI-driven frameworks could provide real-time, proactive oversight without
stifling innovation.
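The arbitrage idea mentioned above is the easiest of these to make concrete. The sketch below is purely illustrative: it scans quoted prices for the same asset across venues and flags a spread only if it survives round-trip fees. The venue names, prices, and flat fee rate are all assumptions, and a real DeFi system would also have to model slippage, gas costs, and pool depth.

```python
def find_arbitrage(quotes, fee_rate=0.003):
    """quotes: {venue: price} for one asset. Returns
    (buy_venue, sell_venue, net_edge) when buying at the lowest quote and
    selling at the highest beats the fees on both legs, else None."""
    buy_venue = min(quotes, key=quotes.get)
    sell_venue = max(quotes, key=quotes.get)
    buy, sell = quotes[buy_venue], quotes[sell_venue]
    # Net edge per unit after paying the fee on both legs of the trade.
    net_edge = sell * (1 - fee_rate) - buy * (1 + fee_rate)
    if net_edge > 0:
        return buy_venue, sell_venue, net_edge
    return None

# Hypothetical quotes: a wide spread is flagged, a thin one is not.
print(find_arbitrage({"venue_a": 100.0, "venue_b": 101.5, "venue_c": 99.2}))
print(find_arbitrage({"venue_a": 100.0, "venue_b": 100.1}))
```

The "AI" part of the excerpt would sit upstream of a check like this, predicting which venues and assets are worth scanning; the fee-aware comparison itself is just arithmetic.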

CTEM shifts the focus from managing IT vulnerabilities in isolation to
managing exposure in collaboration, something that’s far more aligned with the
operational priorities of today’s organizations. Where traditional approaches
center around known vulnerabilities and technical severity, CTEM introduces a
more business-driven lens. It demands ongoing visibility, context-rich
prioritization, and a tighter alignment between security efforts and
organizational impact. In doing so, it moves the conversation from “What’s
vulnerable?” to “What actually matters right now?” – a far more useful
question when resilience is on the line. What makes CTEM particularly relevant
beyond security teams is its emphasis on continuous alignment between exposure
data and operational decision-making. This makes it valuable not just for
threat reduction, but for supporting broader resilience efforts, ensuring
resources are directed toward the exposures most likely to disrupt critical
operations. It also complements, rather than replaces, existing practices like
attack surface management (ASM). CTEM builds on these foundations with more
structured prioritization, validation, and mobilization, turning visibility
into actionable risk reduction.
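The shift from "what's vulnerable?" to "what actually matters right now?" can be made concrete with a toy scoring rule. This is a sketch of the idea only: the field names, weights, and the 0.4 discount for unvalidated exploitability are assumptions, not part of any CTEM standard.

```python
def ctem_priority(exposure):
    """Rank an exposure by what it could actually disrupt, not CVSS alone."""
    severity = exposure["cvss"] / 10.0          # technical severity, scaled 0-1
    business = exposure["asset_criticality"]    # 0-1, set by the business
    # Validation (e.g. a successful attack-path simulation) carries far more
    # weight than a theoretical finding.
    validated = 1.0 if exposure["exploit_validated"] else 0.4
    return round(severity * business * validated, 3)

exposures = [
    {"id": "EXP-1", "cvss": 9.8, "asset_criticality": 0.2, "exploit_validated": False},
    {"id": "EXP-2", "cvss": 6.5, "asset_criticality": 1.0, "exploit_validated": True},
]
ranked = sorted(exposures, key=ctem_priority, reverse=True)
print([e["id"] for e in ranked])   # -> ['EXP-2', 'EXP-1']
```

Note the inversion: the lower-CVSS exposure wins because it sits on a critical asset and its exploitability has been validated, which is exactly the business-driven lens the excerpt describes.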

Remember that in a Platform as a Product approach, developers are your
customers. If they don’t know what’s available, how to use it or what’s coming
next, they’ll find workarounds. These conferences and speaker series are a way
to keep developers engaged, improve adoption and ensure the platform stays
relevant. There’s a human side to this, too, one often left out of the focus on
“business value” and outcomes in corporate-land: just having a friendly
community of humans who like to spend time with each other and learn. ...

Successful platform teams have active platform advocacy. This requires at
least one person working full time to essentially build empathy with your
users by working with and listening to the people who use your platforms. You
may start with just one platform advocate who visits with developer teams,
listening for feedback while teaching them how to use the platform and
associated methodologies. The advocate acts as both a counselor and delegate
for your developers. ... The journey to successful platform adoption is
more than just communicating technical prowess. Embracing systematic
approaches to platform marketing that include clear messaging and positioning
based on customers’ needs and a strong brand ethos is the key to communicating
the value of your platform.

“It’s not enough to know how a transformer model works; what matters is
knowing when and why to use AI to drive business outcomes,” says Scott Weller,
CTO of AI-powered credit risk analysis platform EnFi. “Developers need to
understand the tradeoffs between heuristics, traditional software, and machine
learning, as well as how to embed AI in workflows in ways that are practical,
measurable, and responsible.” ... “In AI-first systems, data is the product,”
Weller says. “Developers must be comfortable acquiring, cleaning, labeling,
and analyzing data, because poor data hygiene leads to poor model
performance.” ... AI safety and reliability engineering “looks at the
zero-tolerance safety environment of factory operations, where AI failures
could cause safety incidents or production shutdowns,” Miller says. To ensure
the trust of its customers, IFS needs developers who can build comprehensive
monitoring systems to detect when AI predictions become unreliable and
implement automated rollback mechanisms to traditional control methods when
needed, Miller says. ... “With the rapid growth of large language models,
developers now require a deep understanding of prompt design, effective
management of context windows, and seamless integration with LLM APIs—skills
that extend well beyond basic ChatGPT interactions,” Tupe says.
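The monitoring-and-rollback pattern Miller describes can be sketched as follows. This is a hedged illustration, not IFS's implementation: the rolling-confidence health check, the thresholds, and the rule-based fallback are all assumptions standing in for whatever reliability signal and traditional control method a real factory system would use.

```python
from collections import deque

class GuardedPredictor:
    """Serve model predictions while they look reliable; otherwise roll
    back automatically to a deterministic, rule-based control method."""

    def __init__(self, model, heuristic, window=50, floor=0.8):
        self.model = model            # callable: x -> (prediction, confidence)
        self.heuristic = heuristic    # deterministic fallback, always safe
        self.scores = deque(maxlen=window)
        self.floor = floor

    def healthy(self):
        # With no evidence yet, assume healthy; otherwise compare the
        # rolling mean confidence against the floor.
        if not self.scores:
            return True
        return sum(self.scores) / len(self.scores) >= self.floor

    def predict(self, x):
        pred, conf = self.model(x)
        self.scores.append(conf)
        if self.healthy():
            return pred
        return self.heuristic(x)      # automated rollback path

# Demo with a deliberately degraded model: low confidence triggers rollback.
guard = GuardedPredictor(
    model=lambda x: ("ai_setpoint", 0.3),
    heuristic=lambda x: "rule_setpoint",
    window=5, floor=0.8,
)
outputs = [guard.predict(i) for i in range(3)]
print(outputs)   # every call falls back to the rule-based setpoint
```

In a zero-tolerance environment the interesting design choice is the health signal itself: prediction confidence is the simplest, but drift statistics on the inputs or disagreement with a shadow model are common, stronger alternatives.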

Something worth noting about increased AI usage in supply chains is that as
AI-enabled systems become more complex, they also become more delicate, which
increases the potential for outages. Something as simple as a single
misconfiguration or unintentional interaction between automated security gates
can lead to a network outage, preventing supply chain personnel from accessing
critical AI applications. During an outage, AI clusters (interconnected
GPU/TPU nodes used for training and inference) can also become unavailable. ...
Businesses must increase network resiliency to ensure their supply chain and
logistics teams always have access to key AI applications, even during network
outages and other disruptions. One approach that companies can take to
strengthen network resilience is to implement purpose-built infrastructure
like out of band (OOB) management. With OOB management, network administrators
can separate and containerize functions of the management plane, allowing it
to operate freely from the primary in-band network. This secondary network
acts as an always-available, independent, dedicated channel that
administrators can use to remotely access, manage, and troubleshoot network
infrastructure.
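The failover idea behind OOB management can be sketched with a simple reachability check: prefer the primary in-band path to a device, and fall back to the dedicated out-of-band channel when that path is down. The addresses below are placeholder examples, and a real OOB deployment would ride a physically separate network (e.g. a console server on cellular or a dedicated circuit), not just a second IP.

```python
import socket

def reachable(host, port, timeout=2.0):
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def management_path(device):
    """Prefer the in-band address; fall back to the independent OOB channel."""
    if reachable(device["inband_addr"], device["mgmt_port"]):
        return "in-band"
    if reachable(device["oob_addr"], device["mgmt_port"]):
        return "out-of-band"
    return "unreachable"

device = {
    "inband_addr": "192.0.2.10",    # placeholder (TEST-NET) in-band address
    "oob_addr": "203.0.113.10",     # placeholder OOB console-server address
    "mgmt_port": 22,
}
```

The value of the secondary path is exactly that it shares no fate with the in-band network: a misconfiguration that blackholes the primary network leaves `management_path` returning "out-of-band" rather than "unreachable".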

In some cases, the pace of change is so fast that buildings are being
retrofitted even as they are being constructed. Once CPUs are installed,
O'Rourke has observed data center owners opting to upgrade racks row by row,
rather than converting the entire facility to liquid cooling at once – largely
because the building wasn’t originally designed to support higher-density
racks. To accommodate this reality, Tate carries out in-row upgrades by
providing specialized structures to mount manifolds, which distribute coolant
from air-cooled chillers throughout the data halls. “Our role is to support
the physical distribution of that cooling infrastructure,” explains O'Rourke.
“Manifold systems can’t be supported by existing ceilings or hot aisle
containment due to weight limits, so we’ve developed floor-mounted frameworks
to hold them.” He adds: “GPU racks also can’t replace all CPU racks
one-to-one, as the building structure often can’t support the added load.
Instead, GPUs must be strategically placed, and we’ve created solutions to
support these selective upgrades.” By designing manifold systems with
actuators that integrate with the building management system (BMS), along with
compatible hot aisle containment and ceiling structures, Tate has developed a
seamless, integrated solution for the white space.

At first, personalization was a way to improve “stickiness” by keeping users
engaged longer, returning more often and interacting more deeply with a site
or service. Recommendation engines, tailored ads and curated feeds were all
designed to keep our attention just a little longer, perhaps to entertain but
often to move us to purchase a product. But over time, the goal has expanded.
Personalization is no longer just about what holds us. It is about what the
system knows about each of us: the dynamic graph of our preferences, beliefs
and behaviors that becomes more refined with every interaction. Today’s AI
systems do not
merely predict our preferences. They aim to create a bond through highly
personalized interactions and responses, creating a sense that the AI system
understands and cares about the user and supports their uniqueness. The tone
of a chatbot, the pacing of a reply and the emotional valence of a suggestion
are calibrated not only for efficiency but for resonance, pointing toward a
more helpful era of technology. It should not be surprising that some people
have even fallen in love and married their bots. The machine adapts not just
to what we click on, but to who we appear to be. It reflects us back to
ourselves in ways that feel intimate, even empathic.

Multiple different hackers are launching attacks through the Microsoft
vulnerability, according to representatives of two cybersecurity firms,
CrowdStrike Holdings, Inc. and Google's Mandiant Consulting. Hackers have
already used the flaw to break into the systems of national governments in
Europe and the Middle East, according to a person familiar with the matter. In
the US, they've accessed government systems, including ones belonging to the
US Department of Education, Florida's Department of Revenue and the Rhode
Island General Assembly, said the person, who spoke on condition that they not
be identified discussing the sensitive information. ... The breaches have
drawn new scrutiny to Microsoft's efforts to shore up its cybersecurity after
a series of high-profile failures. The firm has hired executives from places
like the US government and holds weekly meetings with senior executives to
make its software more resilient. The company's tech has been subject to
several widespread and damaging hacks in recent years, and a 2024 US
government report described the company's security culture as in need of
urgent reforms. ... "There were ways around the patches," which enabled
hackers to break into SharePoint servers by tapping into similar
vulnerabilities, said Bernard. "That allowed these attacks to
happen."