Meta puts the ‘Dead Internet Theory’ into practice
In the old days, when Meta was called Facebook, the company wrapped every new
initiative in the warm metaphorical blanket of “human connection”—connecting
people to each other. Now, it appears Meta wants users to engage with anyone or
anything—real or fake doesn’t matter, as long as they’re “engaging,” which is to
say spending time on the platforms and money on the advertised products and
services. In other words, Meta has so many users that the only way to continue
its previous rapid growth is to build users out of AI. The good news is that
Meta’s “Dead Internet” projects are not going well. ... Meta is testing a
program called “Creator AI,” which enables influencers to create AI-generated
bot versions of themselves. These bots would be designed to look, act, sound,
and write like the influencers who made them, and would be trained on the
wording of their posts. The influencer bots would engage in interactive direct
messages and respond to comments on posts, fueling the unhealthy parasocial
relationships millions already have with celebrities and influencers on Meta
platforms. The other “benefit” is that the influencers could “outsource” fan
engagement to a bot. ... “We expect these AIs to actually, over time, exist on
our platforms, kind of in the same way that accounts do,” Connor Hayes, vice
president of product for generative AI at Meta, said.
Experts Highlight Flaws within Government’s Data Request Mandate Under DPDP Rules 2025
Tech Lawyer Varun Sen Bahl also points out the absence of an appellate mechanism
for such ‘calls for information’ by the Central government, explaining that such
an appeal process only extends against orders of the Data Protection Board. He
explains, “This is problematic because it leaves Data Fiduciaries and Data
Principals with no clear recourse against excessive data collection requests
made under Section 36 read with Rule 22.” Bahl also notes that the provision
lacks specific mention of guardrails like the European Union’s data minimisation
principle under the General Data Protection Regulation (GDPR) while furnishing
such information requests. ... Roy argues that the compliance burdens on Data
Fiduciaries will increase and be aggravated by sweeping requests and by
invocation of the non-disclosure clause. To illustrate, he cites the 2022
Razorpay-AltNews case, in which the Government accessed the names and
transaction details of the news platform’s donors via Razorpay ... To ensure
that government officers and agencies don’t abuse this provision, Roy explains
that “Fiduciaries must [as part of corporate governance] give periodic reports
of the number of such demands.” Similarly, law enforcement and other agencies
should also submit periodic reports of such requests to the Data Protection
Board, including details of cases where the non-disclosure clause is invoked.
How Edge Computing can Give OEMs a Competitive Advantage
Latency matters in warehouse automation too. Performing predictive maintenance
on a shoe sorter, for example, could require real-time monitoring of actuators
that perform diversions every 40 milliseconds. Component-level computing power allows
the system to respond to changing conditions with speed and efficiency levels
that simply wouldn’t be possible with a cloud-based system. ... Edge components
can also communicate with a system’s programmable logic controllers (PLCs),
making their data immediately available to end users. Supporting software on the
customer’s local network interprets this information, enabling predictive
maintenance and other real-time insights while tracking historical trends over
time. ... Edge technology enables you to build assets that deliver higher
utilization to your customers. Much of this benefit comes from the greater
efficiencies of predictive maintenance. Users have less downtime because
unnecessary service is reduced or eliminated, and many problems can be resolved
before they cause unplanned shutdowns. Smart components can also deliver more
process consistency. Ordinarily, parts degrade over time, gradually losing
speed and/or power. With edge capabilities, they can continuously adapt to
changing conditions, including varying parcel weights and normal wear.
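As a rough illustration of what that component-level computing can look like, here is a minimal sketch, assuming a hypothetical diverter that reports its cycle time to an edge module; the tag names, drift threshold, and publish() hook are illustrative assumptions, not anything from the article:

    from collections import deque
    from statistics import mean

    # Hypothetical edge-side monitor for a shoe-sorter diverter actuator. The tag
    # names, drift threshold, and publish() hook are illustrative assumptions.

    NOMINAL_CYCLE_MS = 40.0  # the 40 ms diverter cadence mentioned above
    DRIFT_TOLERANCE = 0.15   # flag when the average cycle time drifts more than 15%
    WINDOW = 250             # number of recent cycles to average over

    recent_cycles = deque(maxlen=WINDOW)

    def publish(topic: str, payload: dict) -> None:
        """Stand-in for writing to a PLC tag or the local supervisory software."""
        print(topic, payload)

    def on_cycle_complete(cycle_ms: float) -> None:
        """Called by the component firmware each time the diverter fires."""
        recent_cycles.append(cycle_ms)
        publish("diverter/cycle", {"ms": cycle_ms})
        if len(recent_cycles) == WINDOW:
            avg = mean(recent_cycles)
            if abs(avg - NOMINAL_CYCLE_MS) / NOMINAL_CYCLE_MS > DRIFT_TOLERANCE:
                # Degradation is detected locally -- no cloud round trip required.
                publish("diverter/maintenance-alert", {"avg_cycle_ms": round(avg, 2)})

The point is simply that the degradation check runs next to the actuator, so an alert can reach the PLC or local supervisory software without waiting on the cloud.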
Have we reached the end of ‘too expensive’ for enterprise software?
LLMs are now changing the way companies approach problems that are difficult or
impossible to solve algorithmically, although the term “language” in Large
Language Models is misleading. ... GenAI enables a variety of features that were
previously too complex, too expensive, or completely out of reach for most
organizations because they required investments in customized ML solutions or
complex algorithms. ... Companies need to recognize generative AI for what it
is: a general-purpose technology that touches everything. It will become part of
the standard software development stack, as well as an integral enabler of new
or existing features. Ensuring the future viability of your software development
requires not only acquiring AI tools for software development but also preparing
infrastructure, design patterns and operations for the growing influence of AI.
As this happens, the role of software architects, developers, and product
designers will also evolve. They will need to develop new skills and strategies
for designing AI features, handling non-deterministic outputs, and integrating
seamlessly with various enterprise systems. Soft skills and collaboration
between technical and non-technical roles will become more important than ever,
as pure hard skills become cheaper and more automatable.
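The article stays general, but one concrete way teams handle non-deterministic outputs is to validate each model response against an expected shape and retry within a bounded budget. A minimal sketch, where complete_fn stands in for whichever model client a stack actually uses and the required keys are assumptions:

    import json

    # Hypothetical guardrail for non-deterministic output: ask for JSON, validate
    # its shape, and retry within a bounded budget. complete_fn stands in for
    # whichever model client your stack actually uses.

    REQUIRED_KEYS = {"summary", "sentiment"}

    def call_with_validation(complete_fn, prompt: str, max_attempts: int = 3) -> dict:
        last_error = None
        for _ in range(max_attempts):
            raw = complete_fn(prompt)  # non-deterministic text comes back
            try:
                parsed = json.loads(raw)
            except json.JSONDecodeError as exc:
                last_error = str(exc)
                continue
            if REQUIRED_KEYS.issubset(parsed):
                return parsed  # shape is acceptable; downstream code can rely on it
            last_error = f"missing keys: {REQUIRED_KEYS - parsed.keys()}"
        raise ValueError(f"model never produced a valid response: {last_error}")

Treating the model as untrusted I/O in this way is one of the design patterns the article suggests architects will need to internalize.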
Is prompt engineering a 'fad' hindering AI progress?
Motivated by the belief that "a well-crafted prompt is essential for obtaining
accurate and relevant outputs from LLMs," aggressive AI users -- such as
ride-sharing service Uber -- have created whole disciplines around the topic.
And yet, there is a reasoned argument to be made that prompts are the wrong
interface for most users of gen AI, including experts. "It is my professional
opinion that prompting is a poor user interface for generative AI systems, which
should be phased out as quickly as possible," writes Meredith Ringel Morris,
principal scientist for Human-AI Interaction for Google's DeepMind research
unit, in the December issue of computer science journal Communications of the
ACM. Prompts are not really "natural language interfaces," Morris points out.
They are "pseudo" natural language, in that much of what makes them work is
unnatural. ... In place of prompting, Morris suggests a variety of approaches.
These include more constrained user interfaces with familiar buttons to give
average users predictable results; "true" natural language interfaces; or a
variety of other "high-bandwidth" approaches such as "gesture interfaces,
affective interfaces (that is, mediated by emotional states),
direct-manipulation interfaces."
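To make the first of those alternatives concrete, a constrained interface can map a handful of familiar controls onto fixed prompt templates, so everyday users never type a freeform prompt at all. A minimal sketch; the action names and template wording are assumptions for illustration:

    # Hypothetical "constrained UI" layer: users pick from fixed actions (buttons,
    # dropdowns, form fields) and the prompt text is generated for them.

    TEMPLATES = {
        "summarize": "Summarize the following text in three bullet points:\n{text}",
        "translate": "Translate the following text into {language}:\n{text}",
        "proofread": "Fix grammar and spelling without changing anything else:\n{text}",
    }

    def build_prompt(action: str, **fields: str) -> str:
        """Turn a button press plus form fields into a predictable prompt."""
        if action not in TEMPLATES:
            raise ValueError(f"unsupported action: {action}")
        return TEMPLATES[action].format(**fields)

    # A click on a "Summarize" button becomes:
    print(build_prompt("summarize", text="Quarterly revenue rose 4 percent ..."))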
Building Resilience Into Cyber-Physical Systems Has Never Been This Mission-Critical
In our quest for cyber resilience, we sometimes—mistakenly—fixate on
hypothetical doomsday scenarios. While this apocalyptic and fear-based thinking
can be an instinctual response to the threats we face, it is not realistic or
helpful. Instead, we must champion the progress, even incremental, that is
achievable through focused, pragmatic measures—like cyber insurance. By
reframing discussions around tangible outcomes such as financial stability and
public safety, we can cultivate a clearer sense of priorities. Regulatory
frameworks may eventually align incentives towards better cybersecurity
practices, but in the interim, transferring risk via a measure like cyber
insurance offers a potent mechanism to enhance visibility into risk mitigation
strategies and implement better cyber hygiene accordingly. By quantifying
potential losses and incentivizing proactive security measures, cyber insurance
can catalyze a necessary and overdue cultural shift towards resilience-oriented
practices—and a safer world. We stand at a pivotal moment in American critical
infrastructure cybersecurity. As hackers threaten to sabotage our vital systems
for ransom, the financial damage that ensued from incidents like the one at
Halliburton obliges us to stay alert and act proactively.
Don't Fall Into the 'Microservices Are Cool' Trap and Know When to Stick to Monolith Instead
Over time, as monolith applications become less and less maintainable, some
teams decide that the only way to solve the problem is to start refactoring by
breaking their application into microservices. Other teams make this decision
just because "microservices are cool." This process takes a lot of time and
sometimes brings even more maintenance overhead. Before going into this, it's
crucial to carefully consider all the pros and cons and ensure you've reached
your current monolith architecture limits. And remember, it is easier to break
than to build. ... As you can see, the modular monolith is a way to get the best
of both worlds. It is like running independent microservices inside a single
monolith while avoiding the collateral microservices overhead. One limitation is
that you cannot scale different modules independently: you will have as many
monolith instances as the most heavily loaded module requires, which may lead to
excessive resource consumption. The other drawback is the limited ability to use
different technologies for different modules. ... When running a monolith
application, you can usually maintain a simpler infrastructure. Options like
virtual machines (such as AWS EC2) or PaaS solutions will suffice. Also, you can
handle much of the scaling, configuration, upgrades, and monitoring manually or
with simple tools.
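To make the "modular monolith" idea concrete, here is a minimal sketch of module boundaries enforced inside one deployable: each module depends only on a small interface, so a later extraction into a separate service stays cheap. The module names and interfaces are assumptions for illustration:

    # Hypothetical modular-monolith layout: one deployable, explicit module
    # boundaries. Modules talk through small interfaces, never through internals.

    from dataclasses import dataclass
    from typing import Protocol

    class BillingApi(Protocol):
        def charge(self, customer_id: str, amount_cents: int) -> None: ...

    @dataclass
    class BillingModule:
        """Billing internals stay private to this module."""
        def charge(self, customer_id: str, amount_cents: int) -> None:
            print(f"charged {customer_id}: {amount_cents} cents")

    @dataclass
    class OrdersModule:
        billing: BillingApi  # depends on the interface, not on BillingModule itself

        def place_order(self, customer_id: str, sku: str) -> str:
            self.billing.charge(customer_id, amount_cents=4999)
            return f"order:{customer_id}:{sku}"

    # Wired together inside a single process:
    orders = OrdersModule(billing=BillingModule())
    print(orders.place_order("c42", "sku-1"))

Because OrdersModule only knows the BillingApi interface, swapping the in-process BillingModule for a remote client later would not touch the ordering code.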
SEC rule confusion continues to put CISOs in a bind a year after a major revision
“There is so much fear out there right now because there is a lack of
clarity,” Sullivan told CSO. “The government is regulating through enforcement
actions, and we get incomplete information about each case, which leads to
rampant speculation.” As things stand, CISOs and their colleagues must chart a
tricky course in meeting reporting requirements in the event of a cybersecurity
incident or breach, Shusko says. That means anticipating those requirements by
making compliance preparation part of any incident response plan. If they must
make a cyber incident
disclosure, companies should attempt to be compliant and forthcoming while
seeking to avoid releasing information that could inadvertently point towards
unresolved security shortcomings that future attackers might be able to
exploit. ... Given that clarity around disclosure isn’t always
straightforward, there is no real substitute for preparedness, and that makes
it essential to practise situations that would require disclosure through
tabletops and other exercises, according to Simon Edwards, chief exec of
security testing firm SE Labs. “Speaking as someone who is invested heavily in
the security of my company, I’d say that the most obvious and valuable thing a
CISO can do is roleplay through an incident.”
How adding capacity to a network could reduce IT costs
Have you heard the phrase “bandwidth economy of scale”? It’s a sophisticated
way of saying that the cost per bit to move a lot of bits is less than it is
to move a few. Over the decades in which information technology evolved from punched
cards to PCs and mobile devices, we've taken advantage of this principle by
concentrating traffic from the access edge inward to fast trunks. ...
Higher capacity throughout the network means less congestion. It’s old-think,
they say, to assume that if you have faster LAN connections to users and
servers, you’ll admit more traffic and congest trunks. “Applications determine
traffic,” one CIO pointed out. “The network doesn’t suck data into it at the
interface. Applications push it.” Faster connections mean less congestion,
which means fewer complaints, and more alternate paths to take without traffic
delay and loss, which also reduces complaints. In fact, anything that creates
packet loss, outages, or even added latency generates complaints, and addressing
complaints is a big source of opex. The complexity comes in because network
speed impacts user/application quality of experience in multiple ways beyond
the obvious congestion impacts. When a data packet passes through a
switch or router, it’s exposed to two things that can delay it.
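The excerpt ends before naming them, but the two per-hop delays usually meant here are serialization delay (clocking the bits onto the wire) and queuing delay (waiting behind other packets). A back-of-envelope sketch with assumed numbers shows why faster links help on both counts:

    # Back-of-envelope per-hop delay: serialization plus a simple M/M/1 queuing
    # model. All numbers below are illustrative assumptions, not from the article.

    PACKET_BITS = 1500 * 8  # one full-size Ethernet frame, in bits

    def serialization_delay(link_bps: float) -> float:
        """Seconds needed just to clock one packet onto the wire."""
        return PACKET_BITS / link_bps

    def mm1_delay(link_bps: float, offered_bps: float) -> float:
        """Mean per-packet delay (queuing + serialization) under M/M/1 assumptions."""
        service = serialization_delay(link_bps)
        utilization = offered_bps / link_bps
        if utilization >= 1.0:
            return float("inf")  # the link is saturated
        return service / (1.0 - utilization)

    for gbps in (1, 10):
        link = gbps * 1e9
        offered = 0.8e9  # assume 800 Mb/s of application traffic in both cases
        print(f"{gbps:>2} Gb/s link: serialization {serialization_delay(link) * 1e6:.1f} us, "
              f"total per-hop delay {mm1_delay(link, offered) * 1e6:.1f} us")

With these assumed figures, moving from 1 Gb/s to 10 Gb/s cuts the per-hop delay from roughly 60 microseconds to about 1.3 for the same offered load, which is the "faster connections mean less congestion" point in miniature.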
Ephemeral environments in cloud-native development
An emerging trend in cloud computing is using ephemeral environments for
development and testing. Ephemeral environments are temporary, isolated spaces
created for specific projects. They allow developers to swiftly spin up an
environment, conduct testing, and then dismantle it once the task is complete.
... At first, ephemeral environments sound ideal. The capacity for rapid
provisioning aligns seamlessly with modern agile development philosophies.
However, deploying these spaces is fraught with complexities that require
thorough consideration before wholeheartedly embracing them. ... The initial
setup and ongoing management of ephemeral environments can still incur
considerable costs, especially in organizations that lack effective automation
practices. If one must spend significant time and resources establishing these
environments and maintaining their life cycle, the expected savings can
quickly diminish. Automation isn’t merely a buzzword; it requires investment
in tools, training, and sometimes a cultural shift within the organization.
Many enterprises may still be tethered to operational costs that can
potentially undermine the presumed benefits. This seems to be a systemic issue
with cloud-native anything.
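As a rough sketch of the automation those savings depend on, the usual pattern is "provision per branch, test, always tear down." Everything below is an assumption about tooling; the Terraform commands are placeholders for whatever IaC an organization actually runs:

    import subprocess
    import uuid
    from contextlib import contextmanager

    # Hypothetical ephemeral-environment helper: provision per-branch
    # infrastructure, run tests against it, and always tear it down afterwards.

    @contextmanager
    def ephemeral_environment(branch: str):
        name = f"pr-{branch}-{uuid.uuid4().hex[:8]}"
        subprocess.run(["terraform", "workspace", "new", name], check=True)
        subprocess.run(["terraform", "apply", "-auto-approve",
                        f"-var=env_name={name}"], check=True)
        try:
            yield name
        finally:
            # Teardown runs even if the tests fail, so idle environments never linger.
            subprocess.run(["terraform", "destroy", "-auto-approve",
                            f"-var=env_name={name}"], check=True)
            subprocess.run(["terraform", "workspace", "select", "default"], check=True)
            subprocess.run(["terraform", "workspace", "delete", name], check=True)

    # Usage in a CI job (sketch):
    # with ephemeral_environment("feature-login") as env_name:
    #     subprocess.run(["pytest", "tests/"], check=True)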
Quote for the day:
"The best leader brings out the best
in those he has stewardship over." -- J. Richard Clarke