Quote for the day:
“It is never too late to be what you might have been.” -- George Eliot
Six questions to ask when crafting an AI enablement plan
As we near the end of 2025, there are two inconvenient truths about AI that
every CISO needs to take to heart. Truth #1: Every employee who can is
using generative AI tools for their job. Even when your company doesn’t provide
an account for them, even when your policy forbids it, even when the employee
has to pay out of pocket. Truth #2: Every employee who uses generative AI will
provide (or likely already has provided) that AI with internal and confidential
company
information. ... In the case of AI, this refers to the difference between the
approved business apps that are trusted to access company data and the growing
number of untrusted and unmanaged apps that have access to that data without the
knowledge of IT or security teams. Essentially, employees are using unmonitored
devices, which can hold any number of unknown AI apps, and each of those apps
can introduce a whole lot of risk to sensitive corporate data. ... Simply put,
organizations cannot afford to wait any longer to get a handle on AI governance.
... So now, the job is to craft an AI enablement plan that promotes productive
use and throttles reckless behaviors. ... Think back to the mid‑2000s, when SaaS
crept into the enterprise through expense reports and project trackers. IT tried
to blacklist unvetted domains, finance balked at credit‑card sprawl, and legal
wondered whether customer data belonged on “someone else’s computer.”
Eventually, we accepted that the workplace had evolved, and SaaS became
essential to modern business.
Why most enterprise AI coding pilots underperform (Hint: It's not the model)
When organizations introduce agentic tools without addressing workflow and
environment, productivity can decline. A randomized controlled study this year
showed that developers who used AI assistance in unchanged workflows completed
tasks more slowly, largely due to verification, rework and confusion around
intent. The lesson is straightforward: Autonomy without orchestration rarely
yields efficiency. ... Security and governance, too, demand a shift in
mindset. AI-generated code introduces new forms of risk: Unvetted
dependencies, subtle license violations and undocumented modules that escape
peer review. Mature teams are beginning to integrate agentic activity directly
into their CI/CD pipelines, treating agents as autonomous contributors whose
work must pass the same static analysis, audit logging and approval gates as
any human developer. GitHub’s own documentation highlights this trajectory,
positioning Copilot Agents not as replacements for engineers but as
orchestrated participants in secure, reviewable workflows. ... Under the hood,
agentic coding is less a tooling problem than a data problem. Every context
snapshot, test iteration and code revision becomes a form of structured data
that must be stored, indexed and reused. As these agents proliferate,
enterprises will find themselves managing an entirely new data layer: One that
captures not just what was built, but how it was reasoned about.
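The pipeline-gate idea above is easy to make concrete. Below is a minimal
sketch, not GitHub's actual Copilot Agents integration: the bot author name is
hypothetical, ruff stands in for whatever static analyzer the project already
runs, and human approval is read from an environment variable the pipeline
would set.

```python
# Sketch: treat agent-authored commits like any other contribution.
# Assumptions: "copilot-agent[bot]" as the bot identity, ruff as the analyzer,
# HUMAN_APPROVED set by the review system -- all illustrative, not real defaults.
import os
import subprocess
import sys

AGENT_AUTHORS = {"copilot-agent[bot]"}

def branch_authors(base: str = "origin/main") -> set[str]:
    """Authors of every commit on this branch since `base`."""
    out = subprocess.run(["git", "log", f"{base}..HEAD", "--format=%an"],
                         capture_output=True, text=True, check=True)
    return {line.strip() for line in out.stdout.splitlines() if line.strip()}

def static_analysis_passes() -> bool:
    """Agents get no exemption from the project's usual static analysis."""
    return subprocess.run(["ruff", "check", "."]).returncode == 0

if __name__ == "__main__":
    if not static_analysis_passes():
        sys.exit("gate failed: static analysis reported issues")
    if branch_authors() & AGENT_AUTHORS and os.environ.get("HUMAN_APPROVED") != "true":
        sys.exit("gate failed: agent-authored changes need a human approval")
    print("gate passed")
```

In practice the same script would run as one step in the existing CI/CD
pipeline, so agent output passes through the identical audit logging and
approval gates as any human developer's work.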
Enabling small language models to solve complex reasoning tasks
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory
(CSAIL) developed a collaborative approach where an LLM does the planning,
then divvies up the legwork of that strategy among smaller ones. Their method
helps small LMs provide more accurate responses than leading LLMs like
OpenAI’s GPT-4o, and approach the precision of top reasoning systems such as
o1, while being more efficient than both.
Their framework, called “Distributional Constraints by Inference Programming
with Language Models” (or “DisCIPL”), has a large model steer smaller
“follower” models toward precise responses when writing things like text
blurbs, grocery lists with budgets, and travel itineraries. ... You may think
that larger-scale LMs are “better” at complex prompts than smaller ones when
it comes to accuracy and efficiency. DisCIPL suggests a surprising
counterpoint for these tasks: If you can combine the strengths of smaller
models instead, you may just see an efficiency bump with similar results. The
researchers note that, in theory, you can plug in dozens of LMs to work
together in the DisCIPL framework, regardless of size. In writing and
reasoning experiments, they went with GPT-4o as their “planner LM,” which is
one of the models that helps ChatGPT generate responses.
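The planner/follower division of labor can be illustrated with a toy sketch.
This is not the DisCIPL API: call_model is a placeholder for whatever LM client
you use, the subtasks and constraints are invented for the example, and the
retry loop merely stands in for whatever steering the real framework does.

```python
# Toy planner/follower split: a large model decomposes the task, small models
# do the legwork under constraints. Names and the plan format are assumptions.
from typing import Callable

def call_model(model: str, prompt: str) -> str:
    """Placeholder for a real LM call (OpenAI client, local model, etc.)."""
    raise NotImplementedError("wire this to an actual model")

def planner_decompose(task: str) -> list[tuple[str, Callable[[str], bool]]]:
    """The planner LM (GPT-4o in the researchers' experiments) turns the task
    into subtasks, each paired with a check the follower's output must satisfy."""
    call_model("gpt-4o", f"Decompose into constrained subtasks: {task}")
    # In this toy we hard-code the parsed plan for a budgeted grocery list.
    return [
        ("List five grocery items with prices", lambda out: out.count("$") == 5),
        ("State the total and keep it under $30", lambda out: "$" in out),
    ]

def follower_solve(task: str, follower: str = "small-lm") -> list[str]:
    """Small follower models handle each subtask."""
    results = []
    for subtask, constraint in planner_decompose(task):
        draft = call_model(follower, subtask)
        for _ in range(4):  # naive retry budget for this sketch only
            if constraint(draft):
                break
            draft = call_model(follower, subtask)
        results.append(draft)
    return results
```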
Key trends accelerating Industrial Secure Remote Access (ISRA) Adoption
As essential maintenance and diagnostic activities continue to shift toward remote and digital execution, they become exposed to cyber risks that were not present when plants, fleets, and factories operated as isolated, closed systems. Compounding the challenge, many industrial organizations still lack the expertise and skill sets to select and operate the proper technologies that establish remote connections efficiently and securely. This, unfortunately, results in operational delays and slower response in critical or emergency situations. Industrial Cyber emphasizes that controlled, identity-bound, and fully auditable access to critical tasks is key to ensuring secure remote access functions as an operational and business enabler—without introducing new pathways for malicious actors. ... Compounding the risk, OT environments frequently rely on legacy hardware that lacks modern encryption capabilities, leaving these connections especially vulnerable. By centralizing access governance, securely managing vendor credentials, streamlining access-request workflows, and maintaining consistent audit trails, industrial organizations can regain control over third-party access. ... Industrial Cyber recognizes two solutions from SSH. 1) PrivX OT is purpose-built for industrial environments. The solution provides passwordless, keyless, and just-in-time industrial secure remote access using short-lived certificates and micro-segmentation to reduce risk. 2) NQX delivers quantum-safe, high-speed network encryption for site-to-site connectivity.
Navigating AI Liability: What Businesses That Utilize AI Need to Know
Cybercriminals can now use generative AI to create extremely convincing
deepfakes. These deepfakes can then be used for corporate espionage, identity
theft and phishing scams. AI software may end up automatically aggregating and
analyzing huge amounts of data from multiple sources. This can increase
privacy invasion risks when comprehensive profiles of people are compiled
without their awareness or consent. AI systems that experience glitches or
malfunctions, allow unauthorized access, or lack robust security could end up
exposing sensitive data. ... It is risky for your
business to publish AI-generated content because AI models are trained on vast
amounts of copyrighted material. The models therefore do not always create
original material, and sometimes produce output that is identical or
strikingly similar to copyrighted content. “It was the AI’s fault” will not be
a valid argument in court if this happens to your business. Ignorance is not a
defense in a copyright infringement claim. ... Content that is fully
generated by AI has no copyright protection. AI-generated content that is
significantly edited by humans may receive copyright protection, but the
situation is murky. Original content that is created by humans and is then
slightly edited or optimized by AI will usually receive full copyright
protection. A lot of businesses now document the process of content creation
to prove that humans created the content and preserve copyright protection.
When the Cloud Comes Home: What DBAs Need to Know About Cloud Repatriation
One of the main drivers for cloud repatriation is cost. Early cloud migrations were often justified by projected savings because there would be no more hardware to maintain. Furthermore, the cloud promised flexible scaling and pay-as-you-go pricing. Nevertheless, for many enterprises, those savings have proven elusive. Data-intensive workloads, in particular, can rack up significant cloud bills. Every I/O operation, network transfer, and storage request adds up. When workloads are steady and predictable, the cloud’s on-demand elasticity can actually become more expensive than on-prem capacity. DBAs, who often have a front-row seat to performance and utilization metrics, can play a crucial role in identifying when cloud costs are out of alignment with business value. ... In highly regulated industries, compliance concerns are another driver. Regulations such as HIPAA, PCI-DSS, and GDPR require your applications and the data they access to be secure and controlled. Organizations may find that managing sensitive data in the cloud introduces risk, especially when data residency, auditability, or encryption requirements evolve. Repatriating workloads can restore a sense of control and predictability—key traits valued by DBAs. ... Today’s computing needs demand an IT architecture that embraces the cloud but also on-premises workloads, including the mainframe. Remember, data gravity attracts applications to where the data resides.
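The cost argument is easy to sanity-check with a quick line-item model. The
sketch below uses entirely made-up unit prices and workload figures; it only
illustrates the kind of comparison a DBA can run against real invoices and
utilization data.

```python
# Back-of-envelope: steady, data-intensive workload in the cloud vs. amortized
# on-prem capacity. Every figure is a hypothetical placeholder -- substitute
# your own bills, capex, and utilization numbers.

def monthly_cloud_cost(storage_gb, io_millions, egress_gb, compute_hours):
    return (storage_gb * 0.10        # $/GB-month of storage (illustrative rate)
            + io_millions * 0.20     # $/million I/O requests
            + egress_gb * 0.09       # $/GB of network egress
            + compute_hours * 1.50)  # $/hour of database compute

def monthly_onprem_cost(hardware_capex, amortization_months, monthly_opex):
    return hardware_capex / amortization_months + monthly_opex

cloud = monthly_cloud_cost(storage_gb=5_000, io_millions=5_000,
                           egress_gb=10_000, compute_hours=730)
onprem = monthly_onprem_cost(hardware_capex=60_000, amortization_months=48,
                             monthly_opex=900)
print(f"cloud ~ ${cloud:,.0f}/month vs on-prem ~ ${onprem:,.0f}/month")
```

With these placeholder figures the I/O and egress terms dominate, which is
exactly the pattern the article describes for steady, data-heavy workloads.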
SaaS price hikes put CIOs’ budgets in a bind
Subscription prices from major SaaS vendors have risen sharply in recent months,
putting many CIOs in a bind as they struggle to stay within their IT budgets.
... While inflation may have driven some cost increases in past months, rates
have since stabilized, meaning there are other factors at play, Tucciarone says.
Vendors are justifying subscription price hikes with frequent product
repackaging schemes, consumption-based subscription models, regional pricing
adjustments, and evolving generative AI offerings, he adds. “Vendors are
rationalizing this as the cost of innovation and gen AI development,” he says.
... SaaS data platforms fall into a similar category as other mission-critical
applications, Aymé adds, because moving an organization’s data can be
prohibitively expensive, on top of the price of a new SaaS tool. Kunal
Agarwal, CEO and cofounder of data observability platform
Unravel Data, also pointed to price increases for data-related SaaS tools. Data
infrastructure costs, including cloud data warehouses, lakehouses, and analytics
platforms, have risen 30% to 50% in the past year, he says. Several factors are
driving cost increases, including the proliferation of computing-intensive gen
AI workloads and a lack of visibility into organizational consumption, he adds.
“Unlike traditional SaaS, where you’re paying for seats, these platforms bill
based on consumption, making costs highly variable and difficult to predict,”
Agarwal says.
How to simplify enterprise cybersecurity through effective identity management
“It is challenging for a lot of organizations to get a complete picture of what
their assets are and what controls apply to those assets,” Persaud says. He
explains that Deloitte’s identity solution assisted the customer in connecting
users with the assets they utilized. As they discovered these assets, they were
able to fine-tune the security controls that were applied to each. “If the
system is going to [process] financial data and other
private information, we need to put the right controls in place on the identity
side,” he says. “We’ve been able to bring those two pieces together by
correlating discovery of assets with discovery of identity and lining that up
with controls from the IT asset management system.” ... “If you think from a
broader risk management perspective, this has been fundamental to our security
model,” he says. The ability to simply track the locations of employees and
assign risk accordingly is a significant advancement in risk monitoring for a
company growing its international presence. The company looks out for instances
of impossible travel: if an employee has signed in from one location and then
from another so distant that they could not possibly have reached it in the
intervening time, an alert is raised. Security
analysts also use the software to scan for risky sign-ins. If a user logs in
from an IP that has been blacklisted, an alert is raised. They have increasingly
relied on conditional access policies informed by monitoring of user
behavior.
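A minimal sketch of the impossible-travel check described above, assuming
hypothetical sign-in records with timestamps and coordinates and a 900 km/h
speed ceiling; commercial identity platforms apply far richer signals (IP
reputation, device posture, behavioral baselines).

```python
# Sketch only: flag sign-in pairs whose implied travel speed is implausible.
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class SignIn:
    when: datetime
    lat: float
    lon: float

def km_between(a: SignIn, b: SignIn) -> float:
    """Great-circle distance via the haversine formula (Earth radius 6371 km)."""
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def impossible_travel(prev: SignIn, curr: SignIn, max_kmh: float = 900.0) -> bool:
    """Alert when the implied speed between two sign-ins exceeds the ceiling."""
    hours = (curr.when - prev.when).total_seconds() / 3600
    if hours <= 0:
        return True
    return km_between(prev, curr) / hours > max_kmh

# Example: London at 09:00, then Sydney four hours later -> alert.
a = SignIn(datetime(2025, 11, 3, 9, 0), 51.5, -0.1)
b = SignIn(datetime(2025, 11, 3, 13, 0), -33.9, 151.2)
print(impossible_travel(a, b))  # True
```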
When an AI Agent Says ‘I Agree,’ Who’s Consenting?
The most autonomous agents can execute a chain of actions related to a
transaction—such as comparing, booking, paying, forwarding the invoice. The
broader the autonomy, the tighter the frame: precise contractual rules,
allow-lists, budgets, a kill-switch, clear user notices, and, where required,
electronic signatures. At this point the question stops being technical and
becomes legal: under what framework does each agent-made click have effect, on
whose authority, and with what safeguards? European law and national laws
already offer solid anchors—agency and online contracting, signatures and secure
payments, fair disclosure—now joined by the newer eIDAS 2 and the AI Act. ...
Under European law, an AI agent has no will of its own. It is a means of
expressing—or failing to express—someone’s will. Legally, someone always
consents: the user (consumer) or a representative in the civil law sense. If an
agent “accepts” an offer, we are back to agency: the act binds the principal
only within the authority granted; beyond that, it is unenforceable. The agent
is not a new subject of law. ... Who is on the hook if consent is tainted?
First, the business that designs the onboarding. Europe’s Digital Services Act
(DSA) bans deceptive interfaces (“dark patterns”) that materially impair a
user’s ability to make a free, informed choice. A pushy interface can support a
finding of civil fraud and a regulatory breach. Second, the principal is bound
only within the mandate.
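The control frame sketched above (allow-lists, budgets, a kill switch) maps
naturally onto code. The example below is illustrative only; class and method
names are invented, and a real deployment would add signatures, user notices,
and notification hooks.

```python
# Sketch of a transaction mandate enforced before an agent's "click" takes effect.
from dataclasses import dataclass, field

class ActionRefused(Exception):
    """Raised when an agent action falls outside its mandate."""

@dataclass
class AgentMandate:
    allowed_merchants: set[str]            # allow-list agreed with the principal
    budget_remaining: float                # spending cap in the account currency
    killed: bool = False                   # kill switch flipped by the user
    audit_log: list[str] = field(default_factory=list)

    def kill(self) -> None:
        self.killed = True

    def authorize(self, merchant: str, amount: float, description: str) -> None:
        """Refuse anything outside the mandate; record what was approved."""
        if self.killed:
            raise ActionRefused("kill switch engaged")
        if merchant not in self.allowed_merchants:
            raise ActionRefused(f"{merchant} is not on the allow-list")
        if amount > self.budget_remaining:
            raise ActionRefused("would exceed the authorized budget")
        self.budget_remaining -= amount
        self.audit_log.append(f"approved {amount:.2f} at {merchant}: {description}")

mandate = AgentMandate(allowed_merchants={"example-airline"}, budget_remaining=500.0)
mandate.authorize("example-airline", 320.0, "one-way fare")  # within the mandate
# mandate.authorize("unknown-shop", 20.0, "gift")            # would raise ActionRefused
```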