Quote for the day:
"The only way to achieve the impossible is to believe it is possible." -- Charles Kingsleigh
When will browser agents do real work?
Vision-based agents treat the browser as a visual canvas. They look at
screenshots, interpret them using multimodal models, and output low-level
actions like “click (210, 260)” or “type ‘Peter Pan’.” This mimics how a human
would use a computer—reading visible text, locating buttons visually, and
clicking where needed. ... DOM-based agents, by contrast, operate directly on
the Document Object Model (DOM), the structured tree that defines every webpage.
Instead of interpreting pixels, they reason over textual representations of the
page: element tags, attributes, ARIA roles, and labels. ... Running a browser
agent once successfully doesn’t mean it can repeat the task reliably. The next
frontier is learning from exploration: transforming first-time behaviors into
reusable automations. A promising strategy starting to be deployed more and more
is to let agents explore workflows visually, then encode those paths into
structured representations like DOM selectors or code. ... With new large
language models excelling at writing and editing code, these agents can
self-generate and improve their own scripts, creating a cycle of
self-optimization. Over time, the system becomes similar to a skilled worker:
slower on the first task, but exponentially faster on repeat executions. This
hybrid, self-improving approach—combining vision, structure, and code
synthesis—is what makes browser automation increasingly robust.
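To make that idea concrete, here is a minimal sketch, assuming Playwright and
entirely hypothetical selectors, URL, and search term, of what a visually
explored path might look like once re-encoded as a reusable DOM-selector
script:

    # Sketch only: replaying a path an agent discovered visually, re-encoded as
    # DOM selectors. The URL, selectors, and search term are hypothetical
    # placeholders, not taken from the article.
    from playwright.sync_api import sync_playwright

    RECORDED_STEPS = [
        {"action": "goto", "target": "https://example.com/library"},
        {"action": "fill", "target": "input[aria-label='Search']", "value": "Peter Pan"},
        {"action": "click", "target": "button[type='submit']"},
    ]

    def replay(steps):
        """Replay a previously explored workflow deterministically."""
        with sync_playwright() as p:
            browser = p.chromium.launch(headless=True)
            page = browser.new_page()
            for step in steps:
                if step["action"] == "goto":
                    page.goto(step["target"])
                elif step["action"] == "fill":
                    page.fill(step["target"], step["value"])
                elif step["action"] == "click":
                    page.click(step["target"])
            browser.close()

    if __name__ == "__main__":
        replay(RECORDED_STEPS)

On this reading, the expensive vision-model reasoning is paid once during
exploration; repeat runs replay the cheap, deterministic selector script and
fall back to exploration only when a selector breaks.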
Security Degradation in AI-Generated Code: A Threat Vector CISOs Can’t Ignore
LLMs have been a boon for developers since OpenAI’s ChatGPT was publicly
released in November 2022, followed by other AI models. Developers were quick to
utilize the tools, which significantly increased productivity for overtaxed
development teams. However, that productivity boost came with security concerns,
such as AI models trained on flawed code from internal or publicly available
repositories. Those models introduced vulnerabilities that sometimes spread
throughout the entire software ecosystem. One way to address the problem was by
using LLMs to make iterative improvements to code-level security during the
development process, under the assumption that LLMs, given the job of correcting
mistakes, would amend them. A recent study, however, turns that assumption on its
head. Although previous studies (and extensive real-world experience, including
our own data) have demonstrated that an LLM can introduce vulnerabilities in the
code it generates, this study went a step further, finding that iterative
refinement of code can introduce new errors. ... The security degradation
introduced in the feedback loop raises troubling questions for developers, tool
designers and AI safety researchers. The answer to those questions, the authors
write, involves human intervention. Developers, for instance, must maintain
control of the development process, viewing AI as a collaborative assistant
rather than an autonomous tool.
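As a rough illustration of that human-in-the-loop stance, the sketch below
gates each LLM refinement iteration behind a security scan and stops when an
iteration makes things worse; the two helper functions are hypothetical stubs,
not real APIs:

    # Hypothetical sketch of keeping a human in control while an LLM iteratively
    # "improves" code. The two helpers below are placeholder stubs; the point is
    # the security-scan gate between refinement iterations.

    def ask_llm_to_refine(code: str) -> str:
        # Placeholder: call whatever model or tooling the team actually uses.
        return code

    def run_security_scan(code: str) -> list[str]:
        # Placeholder: run a real SAST tool and return its list of findings.
        return []

    MAX_ITERATIONS = 3

    def refine_with_oversight(code: str) -> str:
        baseline = run_security_scan(code)
        for _ in range(MAX_ITERATIONS):
            candidate = ask_llm_to_refine(code)
            findings = run_security_scan(candidate)
            if len(findings) > len(baseline):
                # This iteration made security worse: keep the previous version
                # and escalate to a developer instead of silently accepting it.
                return code
            code, baseline = candidate, findings
        return code  # the final result still goes through human code review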
Are We in the Quantum Decade?
It would be prohibitively expensive even for a Fortune 100 company to own,
operate and maintain its own quantum computer. It would require a quantum
ecosystem that includes government, academia and industry entities to make it
accessible to an enterprise. In most cases, the push and funding could come from
the government or through cooperation among nations. Historically, new computing
technology was rented and used as a service. Compute resources financed by
government were booked in advance. Processing occurred in batches using
resource-sharing techniques such as time slicing. Equivalent models are expected
for quantum processing. ... The era of quantum computing looms large, but
enterprises and IT teams should be thinking about it today. Infrastructure needs
to be deployed and algorithms need to be written for executing business use
cases. "For several years to come, CIOs may not have much to do with quantum
computing. But they need to know what it is, what it can do and how much it
costs," said Lawrence Gasman, president of Communications Industry Researchers.
"Quantum networks and cybersecurity will become necessary for secure
communications by 2030 or even earlier." Quantum computing will not replace
classical computing, but data center providers need to be thinking about how
they will integrate the two architectures using interconnects like co-packaged
optics.
When Data Gravity Meets Disaster Recovery
The more data aggregates in one place, the more it pulls everything else toward
it: apps, analytics, integrations, even people and processes. Over time, that
environment becomes a tightly woven web of dependencies. While it may be fine for
day-to-day operations, it becomes a nightmare when something breaks. At that
point, DR turns into a delicate task of relocating an entire ecosystem, not just
a matter of copying files. You have to think about relationships: which
systems rely on which datasets, how permissions are mapped, and how applications
expect to find what they need. Of course, the bigger that web gets, the heavier
the “gravitational field.” Moving petabytes of interconnected data across
regions or clouds isn’t fast or easy. It takes time, bandwidth, and planning,
and every extra gigabyte adds friction – in other words, the more gravity your
data has, the harder it is to recover from disaster quickly. ... To push back
against gravity, organizations are rethinking their architectures. Instead of
forcing all data into one environment, they’re distributing it intelligently,
keeping mission-critical workloads close to where they’re created, while
replicating copies to nearby or complementary environments for protection.
Hybrid and multi-cloud DR strategies have become the go-to solution for this.
They blend the best of both worlds: the low-latency performance of local
infrastructure with the flexibility and geographic reach of cloud storage.
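One hedged way to picture such a strategy is as an explicit
placement-and-replication policy; the workload names, sites, and RPO values
below are invented for illustration:

    # Hypothetical sketch of a placement-and-replication policy that keeps
    # mission-critical workloads close to where data is created while replicating
    # copies to complementary environments for DR. All names and values are invented.
    from dataclasses import dataclass

    @dataclass
    class DrPolicy:
        workload: str
        primary_site: str        # where the data is created and served
        replica_site: str        # complementary environment for recovery
        rpo_minutes: int         # tolerated data-loss window
        async_replication: bool  # async keeps latency low at the primary

    POLICIES = [
        DrPolicy("order-processing", "on-prem-frankfurt", "cloud-eu-central", 5, True),
        DrPolicy("analytics-warehouse", "cloud-eu-central", "cloud-eu-west", 60, True),
        DrPolicy("archive-objects", "cloud-eu-west", "cloud-us-east", 24 * 60, True),
    ]

    def replication_plan(policies):
        """Summarize where each workload fails over and how much data it can lose."""
        for p in policies:
            print(f"{p.workload}: {p.primary_site} -> {p.replica_site} (RPO {p.rpo_minutes} min)")

    replication_plan(POLICIES)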
What’s Driving the EU’s AI Act Shake-Up?
The move to revise the AI Act follows sustained lobbying from US tech giants. In
October, the Computer and Communications Industry Association (CCIA), whose
members include Apple, Meta, and Amazon, launched a campaign pushing for
simplification not only of the AI Act but of the EU’s entire digital rulebook.
Meanwhile, EU officials have reportedly engaged with the Trump administration on
these issues. ... The potential delay reflects pressure from national
authorities. Denmark and Germany have both pushed for a one-year extension. A
spokesperson from Germany’s Federal Ministry for Digital Transformation and
Government Modernization said that a delay “would allow sufficient time for the
practical application of European standards by AI providers, with standards
still currently being elaborated.” ... Another major reform under consideration
is expanding and centralizing oversight powers within the Commission’s AI
Office. Currently responsible for general-purpose AI models (GPAI), the office
would gain new authority to oversee all AI systems based on GPAI, conduct
conformity assessments for certain high-risk systems, and supervise online
services deemed to pose “systemic risk” under the Digital Services Act. This
would shift more power to Brussels and expand the
mandate of the Commission’s AI Office beyond its current role supervising GPAI.
BITS & BYTES: The Foundational Lens for Enterprise Transformation
BITS serves as high-level strategic governance—ensuring balanced maturity
assessments across business alignment, information-centric decision-making,
technology enablement, and security resilience—while leveraging BDAT’s detailed
sub-domains (layers and components) for tactical implementation and operational
oversight. This allows organizations to maintain BDAT’s precision in decomposing
complex IT landscapes (e.g., mapping specific data architectures or application
portfolios) within BITS’s overarching pillars, fostering adaptive governance
that scales from atomic “bits” of change to enterprise-wide transformations ...
If BITS defines what must be managed, BYTES (Balanced Yearly Transformation to
Enhance Services) defines how change must be processed. BYTES is more than a set
of principles; it is a derivative of the core architectural lifecycle: Plan
(Balanced Yearly), Design & Build (Transformation Enhancing), and Run
(Services). Each component of BYTES directly maps to the mandatory stages of a
continuous transformation framework, enabling architects to manage change at its
source. ... The BITS & BYTES framework is not intended to replace existing
architecture frameworks (e.g., TOGAF, Zachman, DAMA, IT4IT, SAFe). Instead, it
acts as a meta-framework—a simplified, high-level matrix that accommodates and
contextualizes the applicability of all existing models.
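As a purely illustrative sketch, the BITS pillars and BYTES stages could be
expressed as a matrix that existing-framework artifacts are slotted into; the
TOGAF, DAMA, and IT4IT placements below are invented examples, not taken from
the framework itself:

    # Illustrative sketch only: one way to express the BITS-pillar x BYTES-stage
    # matrix as data so it can contextualize existing frameworks. The example
    # artifact placements are invented for illustration.
    BITS_PILLARS = ["Business", "Information", "Technology", "Security"]
    BYTES_STAGES = ["Plan", "Design & Build", "Run"]

    matrix = {(pillar, stage): [] for pillar in BITS_PILLARS for stage in BYTES_STAGES}

    # Hypothetical placements of existing-framework artifacts inside the matrix.
    matrix[("Business", "Plan")].append("TOGAF Architecture Vision")
    matrix[("Information", "Design & Build")].append("DAMA data architecture deliverables")
    matrix[("Technology", "Run")].append("IT4IT operate value stream")

    for (pillar, stage), artifacts in matrix.items():
        if artifacts:
            print(f"{pillar} / {stage}: {', '.join(artifacts)}")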
Unlocking GenAI and Cloud Effectiveness With Intelligent Archiving
Why 60% of BI Initiatives Fail (and How Enterprises Can Avoid It)
Many BI projects fail because goals and outcomes aren’t clearly defined. While
enterprises may be confident that they understand BI gaps, their goals are often
vague, lacking proper detail and internal consensus. ... Poor project management
practices, vague processes, and changing responsibilities create even more
confusion. In many failed BI projects, BI is viewed as “just another IT
initiative,” whereas it should be treated as part of a business transformation
program. Without active sponsorship and accountability, the technology may be
delivered, but its adoption and impact suffer. ... Agile and iterative methods
are often preferred because they are effective for BI. The waterfall method, by
contrast, is not recommended for BI projects since it lacks the agility to adapt
to changing requirements, iterative data exploration, and continuous business
feedback. Under the waterfall approach, users are engaged only at the beginning
and end of the project, which leaves gaps in development and data analysis when
issues arise. ... A system is only as good as the people who use it; research
has shown that 55% of users lack confidence in BI tools due to insufficient
training. Enterprises often expend considerable resources on deployment but
neglect enablement. If employees can’t work out how to navigate dashboards,
understand data quality and visualizations, or use insights to make daily
decisions, adoption rates suffer.
Authentication in the age of AI spoofing
Unlike traditional malware, which may find its way into networks through a
compromised software update or download, AI-powered threats utilize machine
learning to analyze how employees authenticate themselves to access networks,
including when they log in, from which devices, typing patterns and even mouse
movements. The AI learns to mimic legitimate behavior while collecting login
credentials and is ultimately deployed to evade basic detection. ... Beyond the
statistics, AI’s effectiveness is driven by its exponentially improving
abilities to social engineer humans — replicating writing style, voice cadence,
facial expressions or speech with subtle nuance and adding realistic context by
scanning social media and other publicly available references. The data is
striking and reflects the crucial need for a multi-layer approach to help
sidestep the exponentially escalating ability of AI to trick humans. ...
Cryptographic protection complements biometric authentication, which verifies
“Is this the right person?” at the device level, while passkeys are used to
verify “Is this the right website or service?” at the network level. Multi-modal
biometrics, such as facial recognition plus fingerprint scanning or biometrics
plus behavioral patterns, further strengthen this approach. As AI-powered
attacks make credential theft and impersonation attacks more sophisticated, the
only sustainable line of defense is a form of authentication that cannot be
tricked or must be cryptographically verified.