
Daily Tech Digest - November 19, 2025


Quote for the day:

"You are not a team because you work together. You are a team because you trust, respect and care for each other." -- Vala Afshar



How to automate the testing of AI agents

Experts view testing AI agents as a strategic risk management function that encompasses architecture, development, offline testing, and observability for online production agents. ... “Testing agentic AI is no longer QA, it is enterprise risk management, and leaders are building digital twins to stress test agents against messy realities: bad data, adversarial inputs, and edge cases,” says Srikumar Ramanathan ... “Agentic systems are non-deterministic and can’t be trusted with traditional QA alone; enterprises need tools that trace reasoning, evaluate judgment, test resilience, and ensure adaptability over time,” says Nikolaos Vasiloglou ... Part of the implementation strategy will require integrating feedback from production back into development and test environments. Although testing AI agents should be automated, QA engineers will need to develop workflows that include reviews from subject matter experts and feedback from other end users. “Hierarchical scenario-based testing, sandboxed environments, and integrated regression suites—built with cross-team collaboration—form the core approach for test strategy,” says Chris Li ... Mike Finley says, “One key way to automate testing of agentic AI is to use verifiers, which are AI supervisor agents whose job is to watch the work of others and ensure that they fall in line. Beyond accuracy, they’re also looking for subtle things like tone and other cues. If we want these agents to do human work, we have to watch them like we would human workers.”
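Finley’s verifier idea can be sketched as a supervisor check that runs over every agent output before it is accepted. This is a minimal rule-based illustration only (all names, rules, and thresholds are hypothetical; a production verifier would typically be an LLM judge evaluating tone and accuracy as well):

```python
from dataclasses import dataclass

@dataclass
class AgentResult:
    task: str
    output: str

def verify(result: AgentResult, banned_phrases: list[str], max_len: int) -> list[str]:
    """Return a list of findings; an empty list means the output passed all checks."""
    findings = []
    if not result.output.strip():
        findings.append("empty output")
    if len(result.output) > max_len:
        findings.append("output exceeds length budget")
    for phrase in banned_phrases:
        if phrase.lower() in result.output.lower():
            findings.append(f"banned phrase present: {phrase!r}")
    return findings

result = AgentResult(task="summarize ticket", output="Customer reports a billing error.")
print(verify(result, banned_phrases=["guaranteed refund"], max_len=500))  # []
```

In practice the verifier would sit in the same automated pipeline as regression suites, flagging outputs for the subject-matter-expert review the article describes.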


AI For Proactive Risk Governance In Today’s Uncertain Landscape

Emerging risks are no longer confined to familiar categories like credit or operational performance. Instead, leaders are contending with a complex web of financial, regulatory, technological and reputational pressures that are interconnected and fast-moving. This shift has made it harder for executives to anticipate vulnerabilities and act before risks escalate into real business impact. ... The sheer volume of evolving requirements can overwhelm compliance teams, increasing the risk of oversight gaps, missed deadlines or inconsistent reporting. For many organizations, the challenge is not simply keeping up but proving to regulators and stakeholders that governance practices are both proactive and defensible. ... As businesses evaluate their options to get ahead of risk, AI is top of the list. But not all AI is created equal, and paradoxically, some approaches may introduce added risk. General-purpose large language models can be powerful tools for information synthesis, but they are not designed to deliver the accuracy, transparency and auditability required for high-stakes enterprise decisions. Their probabilistic nature means outputs can at times be incomplete or inaccurate. ... Every AI output must be explainable, traceable and auditable. Executives need to understand the reasoning behind the recommendations they present to boards, regulators or shareholders. Defensible AI ensures that decisions can withstand scrutiny, fostering both compliance and trust between human and machine.


Navigating India's Data Landscape: Essential Compliance Requirements under the DPDP Act

The Digital Personal Data Protection Act, 2023 (DPDP Act) marks a pivotal shift in how digital personal data is managed in India, establishing a framework that simultaneously recognizes the individual's right to protect their personal data and the necessity for processing such data for lawful purposes. For any organization—defined broadly to include individuals, companies, firms, and the State—that determines the purpose and means of processing personal data (a "Data Fiduciary" or DF), compliance with the DPDP Act requires strict adherence to several core principles and newly defined rules. Compliance with the DPDP Act is like designing a secure building: it requires strong foundational principles, robust security systems, specific safety features for vulnerable occupants (Child Data rules), specialized certifications for large structures, and a clear plan for Data Erasure. Organizations must begin planning now, as the core operational rules governing notice, security, child data, and retention come into force eighteen months after the publication date of the DPDP Rules in November 2025. ... DFs must implement appropriate technical and organizational measures. These safeguards must include techniques like encryption, obfuscation, masking, or the use of virtual tokens, along with controlled access to computer resources and measures for continued processing in case of compromise, such as data backups.


Doomed enterprise AI projects usually lack vision

CIOs and other IT decision-makers are under pressure from boards and CEOs who want their companies to be “AI-first” operations; that runs the risk of moving too fast on execution rather than choosing the right projects, said Steven Dickens, principal analyst at Hyperframe Research. Smart leaders are cautious, pragmatic, and focused on validated value, not jumping the gun on mission-critical processes. “They are ring-fencing pilot projects to low-risk, high-impact areas like internal code generation or customer service triage,” Dickens said. ... In this experimental period, organizations viewing AI as a way to reimagine business will take an early lead, Tara Balakrishnan, associate partner at McKinsey, said in the study. “While many see leading indicators from efficiency gains, focusing only on cost can limit AI’s impact,” Balakrishnan wrote. Scalability, project costs, and talent availability also play key roles in moving proof-of-concept projects to production. AI tools are not just plug and play, said Jinsook Han, chief strategy and agentic AI officer at Genpact. While companies can experiment with flashy demos and proofs of concept, the technology also needs to be usable and relevant, Han said. ... Many AI projects fail because they are built atop legacy IT systems, Han said, adding that modifying a company’s technology stack, workflows, and processes will maximize what AI can do. Humans also still need to oversee AI projects and outcomes — especially when agentic AI is involved, Han said.


GenAI vs Agentic AI: From creation to action — What enterprises need to know

Generative AI and Agentic AI are two separate – but often interrelated – paradigms. Generative AI excels in authoring or creating content from prompts, while Agentic AI involves taking autonomous actions to achieve objectives in complex workflows that involve multiple steps. ... Agentic AI is the next step in the advance of data science – from construction to self-execution. Agents act as intelligent digital workers capable of managing a vast array of complex multi-step workflows. In banking and financial services, Agentic AI enables autonomous function for trading and portfolio management. Given a strategic objective like “maximize return within an acceptable risk parameter,” it can operate autonomously, monitoring market signals, executing trades, rebalancing assets, and adjusting portfolios, all in real time. ... The difference between Generative AI and Agentic AI is starting to fade. We are heading toward a future in which generative models serve as the “thinking engine” of agentic systems. It will not be Generative AI versus Agentic AI. Intelligent systems will reason, create and act across business ecosystems. For this to happen, there will be a need for interoperable systems and common standards. Frameworks such as the Model Context Protocol (MCP) and metadata standards like AgentFacts are already laying the groundwork for a transparent, plug-and-play agent ecosystem that provides trust, transparency, and safe collaboration for agents across platforms.


Pushing the thermal envelope

“When new data centers are designed today, instead of relying solely on the grid, they are integrating on-site power stations with their facilities. These on-site generators function like traditional power stations, and as heat engines, they produce substantial byproduct heat,” Hannah explains. This high-grade, abundant heat opens new possibilities. Technologies such as absorption chillers, historically underutilized in data centers due to insufficient heat, can now be deployed effectively when coupled with BYOP systems. This flexibility extends to operational optimization as well. ... The digital twin methodology allows engineers to create theoretical models of systems to simulate responses and tune control algorithms accordingly. Operational or production-based digital twins extend this approach by using field and system data to continuously improve model accuracy over time. ... The thermal chain and power train now operate less as separate systems and more as partners in a shared ecosystem, each dependent on the other for optimal performance. This growing synergy extends beyond technology, driving closer collaboration between traditionally separate teams across design, engineering, manufacturing, and operations. “The growth is so incredible that customers are looking for products and systems they can deploy quickly – solutions that are easy to install, reliable, densified, cost-effective, and efficient,” says Hannah. “Right now, speed of deployment is the priority.”


Cloud Services Face Scrutiny Under the Digital Markets Act

Today, European authorities announced three new market investigations into cloud-computing services under the Digital Markets Act (DMA), as EU leaders gather in Berlin for the Summit on European Digital Sovereignty — an event billed as a push for an “independent, secure and innovation-friendly digital future for Europe.” Two investigations will assess whether Amazon Web Services (AWS) and Microsoft’s Azure should be designated as gatekeepers, despite apparently “not meeting the DMA gatekeeper thresholds for size, user number and market position.” A third investigation is to assess if the DMA is best placed to “effectively tackle practices that may limit competitiveness and fairness in the cloud computing sector in the EU.” ... Europe is increasingly concerned about data security and sovereignty, spurred in part by the Trump administration’s ongoing hostility to the EU and the powers granted by the CLOUD Act (Clarifying Lawful Overseas Use of Data Act), which allows US law enforcement to obtain data stored abroad, even data concerning non-US citizens. Fears of a potential “kill switch” have pushed digital sovereignty up the EU agenda, with some member states switching away from the biggest cloud providers and adopting European alternatives. However, to switch away from US providers at scale may require competition law enforcement and regulation. The European Commission has passed the Data Act, which requires cloud providers to eliminate switching charges by 2027 and bans “technical, contractual and organisational obstacles” to switching to another provider.


IBM readies commercially valuable quantum computer technology

According to Chong, Loon puts a separate layer on the chip, going three-dimensional, allowing connections between qubits that aren’t immediate neighbors. Even separate chips, the ones contained in the boxes at the base of those giant cryogenic chandelier-shaped refrigerators, can be linked together, says IBM’s Crowder. In fact, that’s already possible with Nighthawk. “You can think of it as wires going between the boxes at the bottom,” Crowder says. “Nighthawk is designed to be able to do that, and it’ll also be used to connect the fault-tolerant modules in the large-scale fault-tolerant system as well.” “That is a big announcement for the industry,” says IDC analyst Heather West. “Now we’re seeing ways to actually begin scaling these systems without squeezing thousands or hundreds of thousands of qubits on a chip.” It’s a misperception that quantum computing isn’t beneficial and can’t be used today. Organizations should already be thinking about how they will use quantum computing, especially if they expect to be able to get a competitive edge from it, West says. “Waiting until the technology advances further could be detrimental because the learning curve that you need to be able to understand quantum and to program quantum algorithms is quite high,” West says. It’s difficult to develop these skills internally, and difficult to bring them into an organization. And then there’s the time it takes to develop use cases and figure out new workflows.


Why modular AI is emerging as the next enterprise architecture standard

LLMs are remarkable, but they are not inherently aligned with enterprise control frameworks. Without a way to govern the reasoning and retrieval pathways, organizations place themselves at risk of unpredictable outputs — and unpredictable headlines. ... The modular approach I explored is built on two ideas: small language models and retrieval-augmented generation. SLMs focus on specific domains rather than being trained to handle everything. Because they are compact and specialized, they can run on more common infrastructure and offer predictable performance. Instead of forcing one model to understand every topic in the enterprise, SLMs stay close to the context they are responsible for. ... Together, SLMs and RAG form a system where intelligence is both efficient and explainable. The model contributes language understanding, while retrieval ensures accuracy and alignment with business rules. It’s an approach that favors control and clarity over brute-force scale — exactly what large organizations need when AI decisions must be defended, not just delivered. ... At the heart of this approach is what I call a semantic layer: a coordination surface where AI agents reason only over the business context and data sources assigned to them. This layer defines three critical elements: What information an agent can access; How its decisions are validated; and When it should escalate or defer to humans. In this design, smaller language models are used where focus matters more than size. 
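The three elements of the semantic layer described above — what an agent can access, how its decisions are validated, and when it escalates to humans — map naturally onto a small policy object. This is a minimal sketch under those assumptions; the source names, confidence values, and routing labels are all illustrative:

```python
from dataclasses import dataclass

@dataclass
class AgentPolicy:
    allowed_sources: set[str]   # what information the agent can access
    confidence_floor: float     # below this, defer to a human

    def can_access(self, source: str) -> bool:
        return source in self.allowed_sources

    def route(self, source: str, confidence: float) -> str:
        """Validate a proposed answer: deny out-of-scope sources, escalate low-confidence ones."""
        if not self.can_access(source):
            return "deny"
        if confidence < self.confidence_floor:
            return "escalate"
        return "answer"

policy = AgentPolicy(allowed_sources={"billing_kb", "pricing_docs"}, confidence_floor=0.7)
print(policy.route("billing_kb", 0.9))   # answer
print(policy.route("hr_records", 0.9))   # deny
print(policy.route("billing_kb", 0.4))   # escalate
```

In the modular design the article describes, each SLM would carry a policy like this, so retrieval and reasoning stay inside the business context assigned to that agent.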


The long conversations that reveal how scammers work

The slow cadence is what scammers use to build trust. The study shows how predictable that progression is when viewed at scale. Early messages tend to focus on small talk, harmless questions, light personal details, and daily routines. These early exchanges often contain subtle checks to see if the target is human. Some scammers ask directly. “By the way, there are a lot of fake people here, are you a real person” is one of the lines captured in the study. ... That distance between the greeting and the attempted cash out is the core challenge in studying long game fraud. Scammers send photos of meals or walks, talk about family, and bring up current events to lay the groundwork for later requests. Scammers often sent images; audio and video were less common and, when used, tended to appear at moments when scammers wanted to strengthen the sense of presence. The researchers found that 20 percent of conversations included selfie requests, and more than half of those requests took place on WhatsApp. ... Long haul scams do not rely on high urgency. They rely on comfort, familiarity, and patience. This is a different challenge than technical support scams or prize scams. Defenders need to detect slow moving risk signals before money leaves accounts. The study also shows the scale challenge. Manual research that covers weeks of dialog is difficult to sustain. The researchers address this by blending an LLM with a workflow that pulls in human reviewers at key points.

Daily Tech Digest - February 08, 2024

The do-it-yourself approach to MDM

If you’re comfortable taking on extra responsibilities and costs, the next big question is whether you can get the right tool — or more often, many tools — you need. This is where you need a detailed understanding of the mobile platforms you have to manage and every platform that needs to integrate with them for everything to work. MDM isn’t an island. It integrates with a sometimes staggering number of enterprise components. Some, like identity management, are obvious; others like log management or incident response are less obvious when you think about successful mobility management. Then there are the external platforms that need connections. Think identity management — Entra, Workspace, Okta — and things like Apple Business Manager, which need to work well in both everyday and unusual situations. Then tack on the network, security, auditing, load balancing, inventory, the help desk and various other services. You’re going to need something to connect with everything you already have, or you could find yourself saddled with multiple migrations.


NCSC warns CNI operators over ‘living-off-the-land’ attacks

The NCSC said that even organisations with the most mature cyber security techniques could easily fail to spot a living-off-the-land attack, and assessed it is “likely” that such activity poses a clear threat to CNI in the UK. ... In particular, it warned, both Chinese and Russian hackers have been observed living-off-the-land on compromised CNI networks – one prominent exponent of the technique is the GRU-sponsored advanced persistent threat (APT) actor known as Sandworm, which uses LOLbins extensively to attack targets in Ukraine. “It is vital that operators of UK critical infrastructure heed this warning about cyber attackers using sophisticated techniques to hide on victims’ systems,” said NCSC operations director Paul Chichester. “Threat actors left to carry out their operations undetected present a persistent and potentially very serious threat to the provision of essential services. Organisations should apply the protections set out in the latest guidance to help hunt down and mitigate any malicious activity found on their networks.” “In this new dangerous and volatile world where the frontline is increasingly online, we must protect and future proof our systems,” added deputy prime minister Oliver Dowden.


What Are the Core Principles of Good API Design?

Your API should also be idiomatic to the programming language it is written against and respect the way that language works. For example, if the API is to be used with Java, use exceptions for errors, rather than returning an error code as you might in C. APIs should follow the principle of least surprise. Part of the way this can be achieved is through symmetry; if you have to add and remove methods, these should be applied everywhere they are appropriate. A good API comprises a small number of concepts; if I’m learning it, I shouldn’t have to learn too many things. This doesn’t necessarily apply to the number of methods, classes or parameters, but rather the conceptual surface area that the API covers. Ideally, an API should only set out to achieve one thing. It is also best to avoid adding anything for the sake of it. “When in doubt, leave it out,” as Bloch puts it. You can usually add something to an API if it turns out to be needed, but you can never remove things once an API is public. As noted earlier, your API will need to evolve over time, so a key part of the design is to be able to make changes further down the line without destroying everything.
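The principles above — raise idiomatic errors rather than returning error codes, and keep operations symmetric — can be shown in a few lines. A minimal sketch in Python (all names hypothetical), where every `add` has a matching `remove` and failures surface as exceptions:

```python
class Registry:
    """A tiny API with symmetric operations and exception-based errors."""

    def __init__(self) -> None:
        self._items: dict[str, object] = {}

    def add(self, key: str, value: object) -> None:
        if key in self._items:
            raise KeyError(f"duplicate key: {key!r}")  # raise, don't return an error code
        self._items[key] = value

    def remove(self, key: str) -> None:
        # symmetric counterpart to add(), with the same error behavior
        if key not in self._items:
            raise KeyError(f"unknown key: {key!r}")
        del self._items[key]

reg = Registry()
reg.add("a", 1)
reg.remove("a")
try:
    reg.remove("a")
except KeyError:
    print("removing a missing key raises KeyError")
```

The conceptual surface area stays small: one pair of operations, one error mechanism, nothing added for its own sake.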


Russian Ransomware Gang ALPHV/BlackCat Resurfaces with 300GB of Stolen US Military Documents

The ALPHV/BlackCat ransomware group has threatened to publish and sell 300 GB of stolen military documents unless Technica Corporation gets in touch. “If Technica does not contact us soon, the data will either be sold or made public,” the ransomware gang threatened. However, there is no guarantee that the ransomware gang would not pass the military documents to adversaries even after the military contractor pays the ransom. The BlackCat ransomware gang also posted screenshots of the leaked military documents as proof, displaying the victims’ names, social security numbers, job roles and locations, and clearance levels. Other military documents include corporate information such as billing invoices and contracts for private companies and federal agencies such as the FBI and the US Air Force. So far, the motive of the cyber attack remains unknown, but it’s common for threat actors to feign financial motives to conceal their true geopolitical objectives. While the leaked military documents may not be classified, they still contain crucial personal information that state-linked threat actors could use for targeting.


6 best practices for better vendor management

To build a stronger relationship with vendors, “CIOs should bring them into the fold regarding their priorities and potential concerns about what may — or may not — lie ahead, from a regulatory perspective or the general economic climate, for example,” says Kevin Beasley, CIO at VAI, a midmarket ERP software developer. “A few years ago, supply-chain snags had CIOs looking for new technology,” Beasley says. “Lately, a talent shortage means CIOs are pushing for more automation. CIOs that don’t delay posing questions about how vendor products can solve such challenges, but also take the time to hear the information, will build a valuable rapport that can benefit both parties.” Part of building a collaborative partnership is staying in close contact. It’s important to establish clear communication channels and schedule regular check-ins with active vendors, “to understand performance, expectations, and progress while recognizing that no process or service goes perfectly all the time,” says Patrick Gilgour, managing director of the Technology Strategy and Advisory practice at consulting firm Protiviti.


Three commitments of the data center industry for 2024

To become more authentic and credible in these reputation-building dialogues and go beyond the data center, we must be more representative of the people our infrastructure ultimately serves. Although progress has been made, we must keep evolving. We need diversity of background, experience, ethnicity, age, and outlook in order to fully embrace the challenges of digital infrastructure. The range of roles, skillsets, and opportunities in the sector is far wider than many outside the industry recognize. Creating organizations where every person can be themselves, and deliver in line with their ethics, values, and beliefs is a prerequisite for building a positive reputation. And of course, the more attractive an industry we become, the more great candidates, partners, and supporters we’ll attract. ... Speaking of inspiring the next generation, 2024 can be the year in which we embrace youth. How do we attract more young people into the industry? By inspiring them. The data center sector is a dynamic, exciting, and rapidly growing sector. We want to ensure this is being effectively articulated in print, across social media, and online.


Is your cloud security strategy ready for LLMs?

When employees and contractors use those public models, especially for analysis, they will be feeding those models internal data. The public models then learn from that data and may leak those sensitive corporate secrets to a rival who asks a similar question. “Mitigating the risk of unauthorized use of LLMs, especially inadvertent or intentional input of proprietary, confidential, or material non-public data into LLMs” is tricky, says George Chedzhemov, BigID’s cybersecurity strategist. Cloud security platforms can help, he adds, especially for access controls and user authentication, encryption of sensitive data, data loss prevention, and network security. Other tools are available for data discovery and surfacing sensitive information in structured, unstructured, and semi-structured repositories. “It is impossible to protect data that the organization has lost track of, data that has been over-permissioned, or data that the organization is not even aware exists, so data discovery should be the first step in any data risk remediation strategy, including one that attempts to address AI/LLM risks,” says Chedzhemov.
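The data discovery step Chedzhemov describes can be sketched at its simplest as pattern scanning over text. This is only an illustration of the idea — the detectors below are toy regexes, and real discovery tools use far richer classification across structured and unstructured stores:

```python
import re

# Illustrative detectors only; names and patterns are hypothetical
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

def discover(text: str) -> dict[str, list[str]]:
    """Return all sensitive-looking matches in a document, grouped by category."""
    return {name: pat.findall(text) for name, pat in PATTERNS.items() if pat.findall(text)}

doc = "Contact jane.doe@corp.example, SSN 123-45-6789, key sk-abcdef0123456789."
print(discover(doc))
```

Surfacing matches like these is what lets an organization know which repositories carry data that must never reach a public LLM.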


Shadow AI poses new generation of threats to enterprise IT

Functional risks stem from an AI tool's ability to function properly. For example, model drift is a functional risk. It occurs when the AI model falls out of alignment with the problem space it was trained to address, rendering it useless and potentially misleading. Model drift might happen because of changes in the technical environment or outdated training data. ... Operational risks endanger the company's ability to do business. Operational risks come in many forms. For example, a shadow AI tool could give bad advice to the business because it is suffering from model drift, was inadequately trained or is hallucinating -- i.e., generating false information. Following bad advice from GenAI can result in wasted investments -- for example, if the business expands unwisely -- and higher opportunity costs -- for example, if it fails to invest where it should. ... Legal risks follow functional and operational risks if shadow AI exposes the company to lawsuits or fines. Say the model advises leadership on business strategy. But the information is incorrect, and the company wastes a huge amount of money doing the wrong thing. Shareholders might sue.
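Model drift of the kind described above can be surfaced by comparing the distribution of a model's outputs at training time against what it produces in production. A minimal sketch using total variation distance (the labels, data, and any alerting threshold are hypothetical):

```python
from collections import Counter

def drift_score(train_labels: list[str], live_labels: list[str]) -> float:
    """Total variation distance between training-time and live label distributions (0 = identical, 1 = disjoint)."""
    def dist(labels: list[str]) -> dict[str, float]:
        counts = Counter(labels)
        n = len(labels)
        return {k: v / n for k, v in counts.items()}
    p, q = dist(train_labels), dist(live_labels)
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

train = ["approve"] * 80 + ["deny"] * 20   # distribution the model was trained against
live = ["approve"] * 50 + ["deny"] * 50    # what production traffic now looks like
print(round(drift_score(train, live), 2))  # 0.3
```

A score trending upward over time is a signal that the model has fallen out of alignment with its problem space and needs retraining, which is exactly the functional risk that unmonitored shadow AI tools leave undetected.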


Creating a Data Quality Framework

A start-up business may not initially have a need for organizing massive amounts of data (it doesn’t yet have massive amounts of data to organize), but a master data management (MDM) program at the start can be remarkably useful. Master data is the critical information needed for doing business accurately and efficiently. For example, the business’s master data contains, among other things, the correct addresses of the start-up’s new customers. Master data must be accurate to be useful – the use of inaccurate master data would be self-destructive. If the organization is doing business internationally, it may need to invest in a Data Governance (DG) program to deal with international laws and regulations. Additionally, a Data Governance program will manage the availability, integrity, and security of the business’s data. An effective DG program ensures that data is consistent and trustworthy and doesn’t get misused. A well-designed DG program includes not only useful software, but policies and procedures for humans handling the organization’s data. A Data Quality framework is normally developed and used when an organization has begun using data in complicated ways for research purposes. 


Meta Is Being Urged to Crack Down on UK Payment Scams

Since social media marketplace platforms such as Facebook Marketplace do not have dedicated payment portals that accept payment cards, Davis said, standard security practices adopted by card issuers cannot be used to protect customers. As a result, preventing fraud on social media platforms is a challenge, he said. "To tackle this, we need greater action from Meta to stop fraudulent ads from being put in front of the U.K. consumers," Davis said. Meta Public Policy Head Philip Milton, who testified before the committee, said his company takes fraud prevention "extremely seriously." Milton said Meta has adopted such measures as verifying ads on its platforms and permitting only financial ads that have cleared the U.K. Financial Services Verification process rolled out by the British Financial Conduct Authority. "A good indicator of fraud is fake accounts, as scammers generally tend to use fake accounts to carry out scams. As fraud prevention, Meta removed 827 million fake accounts in the third quarter of 2023," Milton said. Microsoft Government Affairs Director Simon Staffell said the computing giant pursues criminal infrastructure disruption as one of its fraud prevention strategies.



Quote for the day:

"If you are willing to do more than you are paid to do, eventually you will be paid to do more than you do." -- Anonymous