
Daily Tech Digest - September 18, 2025


Quote for the day:

"When your life flashes before your eyes, make sure you've got plenty to watch." -- Anonymous


The new IT operating model: cloud-managed networking as a strategic lever

Enterprises are navigating an environment where the complexity of IT is increasing exponentially. Hybrid work requires consistent connectivity across homes, offices, and campuses. Edge computing and IoT generate massive volumes of data at distributed sites. Security risks escalate as the attack surface grows. Traditional, hardware-centric approaches leave IT teams struggling to keep up. Managing dozens or hundreds of controllers, patching firmware manually, and troubleshooting issues site by site is not sustainable. Cloud-managed networking changes that equation. By centralizing management, applying AI-driven intelligence, and extending visibility across distributed environments, it enables IT to shift from reactive firefighting to proactive strategy. ... Enterprises adopting cloud-managed networking are making a decisive shift from complexity to clarity. Success requires more than technology alone. It demands a partner that understands how to translate advanced capabilities into measurable business outcomes. ... Cloud-managed networking is not just another IT trend. It is the operating model that will define enterprise technology for the next decade. By elevating the network from infrastructure to strategy, it enables organizations to move faster, stay secure, and innovate with confidence.


Why Shadow AI Is the Next Big Governance Challenge for CISOs

In many respects, shadow AI is a subset of a broader shadow IT problem. Shadow IT is an issue that emerged more than a decade ago, largely emanating from employee use of unauthorized cloud apps, including SaaS. Lohrmann noted that cloud access security broker (CASB) solutions were developed to deal with the shadow IT issue. These tools are designed to provide organizations with full visibility of what employees are doing on the network and on protected devices, while only allowing access to authorized instances. However, shadow AI presents distinct challenges that CASB tools are unable to adequately address. "Organizations still need to address other questions related to licensing, application sprawl, security and privacy policies, procedures and more ...," Lohrmann noted. A key difference between IT and AI is the nature of data, the speed of adoption and the complexity of the underlying technology. In addition, AI is often integrated into existing IT systems, including cloud applications, making these tools more difficult to identify. Chuvakin added, "With shadow IT, unauthorized tools often leave recognizable traces – unapproved applications on devices, unusual network traffic or access attempts to restricted services. Shadow AI interactions, however, often occur entirely within a web browser or personal device, blending seamlessly with regular online activity or not leaving any trace on any corporate system at all."


Cisco strengthens integrated IT/OT network and security controls

Melding IT and OT networking and security is not a new idea, but it's one that has seen growing attention from Cisco. ... Cisco also added a new technology called AI-powered asset clustering to its Cyber Vision OT management suite. Cyber Vision keeps track of devices connected to an industrial network, builds a real-time map of how these devices talk to each other and to IT systems, and can detect abnormal behavior, vulnerabilities, or policy violations that could signal malware, misconfigurations, or insider threats, Cisco says. ... Another significant move that will help IT/OT integration is the planned integration of the management console for Cisco's Catalyst and Meraki networks. That combination will allow IT and OT teams to see the same dashboard for industrial OT and IT enterprise/campus networks. Cyber Vision will feed into the dashboard along with other Cisco management offerings such as ThousandEyes, which gives customers a shared inventory of assets, traffic flows and security. "What we are focusing on is helping our customers have the secure networking foundation and architecture that lets IT teams and operational teams kind of have one fabric, one architecture, that goes from the carpeted spaces all the way to the far reaches of their OT network," Butaney said.


Global hiring risks: What you need to know about identity fraud and screening trends

Most organizations globally include criminal record checks in their pre-employment screening. Employment and education verifications are also common, especially in EMEA and APAC. ... “Employers that fail to strengthen their identity verification processes or overlook recurring discrepancy patterns could face costly consequences, from compliance failures to reputational harm,” said Euan Menzies, President and CEO of HireRight. ... More than three-quarters of businesses globally found at least one discrepancy in a candidate’s background over the past year. Thirteen percent reported finding one discrepancy for every five candidates screened. Employment verification remains the area where most inconsistencies are discovered, especially in APAC and EMEA. These discrepancies range from minor errors like incorrect dates to more serious issues such as fabricated job histories. ... Companies are increasingly adopting post-hire screening to address risks that emerge after someone is hired. In North America, only 38 percent of companies now say they do no post-hire screening, a sharp drop from 57 percent last year. Common post-hire checks include driver monitoring and periodic rescreening for regulated roles. These efforts help companies catch new issues such as undisclosed criminal activity, changes in legal eligibility to work, or evolving insider threats.


Doomprompting: Endless tinkering with AI outputs can cripple IT results

Some LLMs appear to be designed to encourage long-lasting conversation loops, with answers often spurring another prompt. ... “When an individual engineer is prompting an AI, they get a pretty good response pretty quick,” he says. “It gets in your head, ‘That’s pretty good; surely, I could get to perfect.’ And you get to the point where it’s the classic sunk-cost fallacy, where the engineer is like, ‘I’ve spent all this time prompting, surely I can prompt myself out of this hole.’” The problem often happens when the project lacks definitions of what a good result looks like, he adds. “Employees who don’t really understand the goal they’re after will spin in circles not knowing when they should just call it done or step away,” Farmer says. “The enemy of good is perfect, and LLMs make us feel like if we just tweak that last prompt a little bit, we’ll get there.” ... Govindarajan has seen some IT teams get stuck in “doom loops” as they add more and more instructions to agents to refine the outputs. As organizations deploy multiple agents, constant tinkering with outputs can slow down deployments and burn through staff time, he says. “The whole idea of doomprompting is basically putting that instruction down and hoping that it works as you set more and more instructions, some of them contradicting with each other,” he adds. “It comes at the sacrifice of system intelligence.”


Vanishing Public Record Makes Enterprise Data a Strategic Asset

“We are rapidly running out of public data that is credible and usable. More and more enterprises will start to assign value to their data and go beyond partnerships to monetize it. For example, wind measurements captured by a wind turbine company could be helpful to many businesses that are not competitors,” said Olga Kupriyanova, principal consultant of AI and data engineering at ISG. ... "We’re entering a defining moment in AI where access to reliable, scalable, and ethical data is quickly becoming the central bottleneck, and also the most valuable asset. As legal and regulatory pressure tightens access to public data, due to copyright lawsuits, privacy concerns, or manipulation of open data repositories, enterprises are being forced to rethink where their AI advantage will come from,” said Farshid Sabet, CEO and co-founder at Corvic AI, developer of a GenAI management platform. ... The economic consequences of such data loss are already visible. Analysts estimate that U.S. public data underpinned nearly $750 billion of business activity as recently as 2022, according to the Department of Commerce. The loss of such data blinds companies that build models for everything from supply chain forecasting to investment strategy and predictions.


The Architecture of Responsible AI: Balancing Innovation and Accountability

The field of AI governance suffers from what Mackenzie et al. reaffirm as the "principal-agent problem," where one party (the principal) delegates tasks to another party (the agent). But their interests are not perfectly aligned, leading to potential conflicts and inefficiencies. ... Architects occupy a unique position in this landscape. Unlike regulators who may impose constraints post-design, architects work at the intersection of possibility and constraint. They must balance competing requirements, such as performance and privacy, efficiency and equity, speed and safety, within coherent system designs. Every architectural decision embeds values, priorities, and assumptions about how systems should behave. ... current AI guidance suffers from systematic weaknesses: evidence quality is sacrificed for speed, commercial interests masquerade as objective advice, and some perspectives dominate while broader stakeholder voices remain unheard ... Architects, being well-placed to bridge the gap between strategy and technology, hold a key role in establishing the principles that govern how systems behave, interact, and evolve. In the context of AI, this principle set extends beyond technical design. It encompasses the ethical, social, and legal aspects as well.


AI will make workers ‘busier in the future’ – so what’s the point exactly?

“I have to admit that I’m afraid to say that we are going to be busier in the future than now,” he told host Liz Claman. “And the reason for that is because a lot of different things that take a long time to do are now faster to do. I’m always waiting for work to get done because I’ve got more ideas.” ... “The more productive we are, the more opportunity we get to pursue new ideas,” Huang continued. Reading between the lines here, it seems the so-called efficiency gains afforded by AI will mean workers have more work dumped in their laps – onto the next task, no rest for the wicked, etc. Huang’s comments run counter to the prevailing sentiment among big tech executives on exactly what AI will deliver for both enterprises and individual workers. ... We’ve all read the marketing copy and heard it regurgitated by tech leaders on podcasts and keynote stages – AI will allow us to focus on the “more rewarding” aspects of our jobs. They’ve never fully explained what this entails, or how it will pan out in the workplace. To be quite honest, I don’t think they know what it means. Marketing probably made it up and they’ve stuck with it. ... Will we be busier spending time on those rewarding aspects of our jobs? I have to say, I’m doubtful. The reality is that workers will be pulled into other tasks and merely end up drowning in the same cumbersome workloads they’ve been dealing with since the pandemic.


Building Safer Digital Experiences Through Robust Testing Practices

Secure software testing forms the bedrock of resilient applications, proactively uncovering flaws before they become critical. Early testing practices can significantly reduce risks, costs, and exposure to threats. According to Global Market Insights, the growing number and size of data breaches have increased the need for security testing services. Organizations that heavily use security AI and automation save an average of USD 1.76 million compared to those that don’t. About 51% plan to increase their security spending. Early integration of techniques like Static Application Security Testing (SAST) can detect vulnerabilities in existing code. It can also help to fix bugs during development. ... Organizations must verify that their systems handle personal data securely and comply with global regulations like GDPR and CCPA. Testing ensures sensitive information is protected from leaks or unauthorized use. Americans are highly concerned about how companies use their private data. ... Stress testing evaluates how applications perform under extreme loads. It helps identify potential failures in scalability, response times, and resource management. Vulnerability assessments concentrate on uncovering security gaps. Verified Market Reports notes that, after recent financial crises, governments are putting stronger emphasis on stress testing.
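The stress-testing idea mentioned above can be illustrated with a minimal sketch. This hypothetical example fires concurrent calls at a stand-in workload function (`handle_request` is a placeholder, not a real service) and reports simple latency statistics, which is the core of what a load test measures:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(payload: int) -> int:
    """Stand-in for a service endpoint; simulates a small unit of work."""
    time.sleep(0.001)
    return payload * 2

def stress_test(workers: int, requests: int) -> dict:
    """Fire `requests` calls across `workers` threads and report latency stats."""
    latencies = []

    def timed_call(i: int) -> None:
        start = time.perf_counter()
        handle_request(i)
        latencies.append(time.perf_counter() - start)

    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(timed_call, range(requests)))

    return {
        "requests": len(latencies),
        "max_latency_s": max(latencies),
        "avg_latency_s": sum(latencies) / len(latencies),
    }

report = stress_test(workers=8, requests=100)
```

A real stress test would target the deployed system over the network and push load until scalability limits appear, but the shape of the measurement is the same.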


Prompt Engineering Is Dead – Long Live PromptOps

PromptOps is gaining traction rapidly because it has the potential to address major challenges in the use of LLMs, such as prompt drift and suboptimal output. Yet incorporating PromptOps effectively into an organization is far from simple, requiring a structured and clear process, the right tools, and a mindset that enables collaboration and effective centralization. Digging deeper into what PromptOps is, why it is needed, and how it can be implemented effectively can help companies to find the right approach when incorporating this methodology for improving their LLM applications usage. ... Before PromptOps is implemented, an organization typically has prompts scattered across multiple teams and tools, with no structured management in place. The first stage of implementing PromptOps involves gathering every detail on LLM applications usage within an organization. It is essential to understand precisely which prompts are being used, by which teams, and with which models. The next stage is to build consistency into this practice by incorporating versioning and testing. Adding secure access control at this stage is also important, in order to ensure only those who need it have access to prompts. With these practices in place, organizations will be well-positioned to introduce cross-model design and embed core compliance and security practices into all prompt crafting. 

Daily Tech Digest - February 24, 2025


Quote for the day:

"A tough hide with a tender heart is a goal that all leaders must have." -- Wayde Goodall


A smarter approach to training AI models

AI models are beginning to hit the limits of compute. Model size is far outpacing Moore's Law and the advances in AI training chips. Training runs for large models can cost tens of millions of dollars due to the cost of chips. This issue has been acknowledged by prominent AI engineers including Ilya Sutskever. The costs have become so high that Anthropic has estimated that it could cost as much to update Claude as it did to develop it in the first place. Companies like Amazon are spending billions to erect new AI data centers in an effort to keep up with the demands of building new frontier models. ... With a better foundational understanding of how AI works, we can approach AI model training and deployment in new ways that require a fraction of the energy and compute, bringing the rigor of other sciences to AI with a principles-first approach. ... By eschewing the inefficiencies and less theoretically justified parts of deep learning, we create a path forward to the next generation of truly intelligent AI, one we've seen surpass the wall deep learning has hit. We have to understand how learning works and build models with interpretability and efficiency in mind from the ground up, especially as high-risk applications of AI in sectors like finance and healthcare demand more than the nondeterministic behavior we've become accustomed to.


Strategic? Functional? Tactical? Which type of CISO are you?

Various factors influence what type of CISO a company may need, says Patton, a former CISO now working as a cybersecurity executive advisor at Cisco. A large, older company with a big, complicated tech stack will need someone with different skills, experience, and leadership qualities than a cloud-native startup that’s rapidly growing and changing. A heavily regulated industry such as financial services, healthcare, or utilities needs someone steeped in how to navigate all the compliance requirements. ... The path professionals take to the CISO seat also influences what type or types of CISOs they tend to be, adds Matt Stamper, CEO, CISO, and executive advisor with Executive Advisors Group as well as a board member with the ISACA San Diego chapter. Different career paths forge different types of executives, he says. Those who advanced through technical roles typically retain a technology bent, while those who came up through governance and risk functions usually gravitate toward compliance-focused roles. ... “CISOs should and tend to lean into where they’re gifted,” says Jenai Marinkovic, vCISO and CTO with Tiro Security and a member of the Emerging Trends Working Group with the IT governance association ISACA.


Becoming Ransomware Ready: Why Continuous Validation Is Your Best Defense

With the nature of IOCs being subtle and intentionally difficult to detect, how do you know that your XDR is effectively nipping them all in the bud? You hope that it is, but security leaders are using continuous ransomware validation to get a lot more certainty than that. By safely emulating the full ransomware kill chain - from initial access and privilege escalation to encryption attempts - tools like Pentera validate whether security controls, including EDR and XDR solutions, trigger the necessary alerts and responses. If key IOCs like shadow copy deletion and process injection go undetected, then that's a crucial flag to prompt security teams to fine-tune detection rules and response workflows. ... Here's the reality: testing your defenses once a year leaves you exposed the other 364 days. Ransomware is constantly evolving, and so are the Indicators of Compromise (IOCs) used in attacks. Can you say with certainty that your EDR is detecting every IOC it should? The last thing you need to stress about is how threats are constantly changing into something your security tools will fail to recognize and aren't prepared to handle. That's why continuous ransomware validation is essential. With an automated process, you can continuously test your defenses to ensure they stand up against the latest threats.
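To make the IOC idea concrete, here is a toy sketch of the kind of pattern matching a detection rule performs. The patterns are illustrative only (shadow copy deletion via `vssadmin` is a well-known ransomware precursor); real EDR rules correlate many more signals than a command line:

```python
import re

# Illustrative IOC patterns only; real detection rules are far richer.
IOC_PATTERNS = {
    "shadow_copy_deletion": re.compile(r"vssadmin\s+delete\s+shadows", re.IGNORECASE),
    "boot_config_tamper": re.compile(r"bcdedit\s+/set", re.IGNORECASE),
}

def match_iocs(command_line: str) -> list[str]:
    """Return the names of IOC patterns a process command line matches."""
    return [name for name, pattern in IOC_PATTERNS.items()
            if pattern.search(command_line)]

hits = match_iocs("vssadmin delete shadows /all /quiet")
```

Continuous validation flips this around: it safely replays commands like the one above and checks that the deployed controls actually raise the alert.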


US intensifies scrutiny of the EU’s Digital Markets Act

The DMA introduced unprecedented restrictions and requirements for companies designated as "gatekeepers" in the digital market. These companies must comply with a strict set of rules designed to prevent unfair business practices and ensure market accessibility for smaller competitors. The Act mandates various requirements including interoperability for core platform services, restrictions on personal data combination across services, and prohibition of self-preferencing practices in rankings and search results. "Big tech's designated platforms can no longer unfairly promote their own products or services above yours (EU-based companies) in search results or ads," states one DMA clause pertaining to ensuring a level playing field. ... Meanwhile, the European Commission — where Ribera serves as the second-highest ranking official under President Ursula von der Leyen — maintains that these regulations are not targeted at US companies, according to the report. The Commission argued that the DMA is designed to ensure fair competition and consumer choice in digital markets, regardless of companies' national origin. However, the predominance of US firms among those affected has intensified transatlantic tensions over digital policy.


The Technology Blueprint for CIOs: Expectations and Concerns

"Security sits at the front and center of business innovations, especially in sectors like banking and finance, where protecting user data and privacy is paramount. Every sector has its own unique challenges and opportunities, making a sector-driven approach essential," said Sachin Tayal, managing director at Protiviti member firm for India. AI-powered fraud detection systems are now integral, using behavior biometrics and facial recognition to detect and mitigate threats such as UPI frauds. Decentralized finance is also gaining traction, with blockchain-based solutions modernizing core banking functions and facilitating secure, transparent digital transactions, the report found. ... The industrial manufacturing sector is embracing Industry 4.0, characterized by the convergence of AI, IoT and cloud technologies. The market is seeing a shift toward digital twins and real-time analytics to optimize production processes. The integration of autonomous mobile robots and collaborative robots, cobots, is enhancing efficiency and safety on the production floor, the report said. ... CIOs have their work cut out for them: innovate or risk becoming redundant. "Technology is driving businesses today, and the transformative role of the CIO amid the rapid rise of AI and digital innovations has never been more critical. The CIO now wears many hats - CTO, CISO and even CEO - as roles evolve to meet the demands of a digital-first world," Gupta said.


Man vs. machine: Striking the perfect balance in threat intelligence

One of the key things you must be aware of is your unconscious biases. Because we all have them. But being able to understand that and implement practices that challenge your assumptions, analysis and hypotheses is key to providing the best intelligence product. I think it's a fascinating problem, particularly as it's not necessarily something a SOC analyst or a vulnerability manager may consider, because it's not really a part of their job to think that way, right? Fortunately, when it comes to working with the AI data, we can apply things like system prompts, we can be explicit in what we want to see as the output, and we can ask it to demonstrate where and why findings are identified, and their possible impact. Alongside that, I think the question also demonstrates why we as humans can't forgo things like training or maintaining skills. ... It's also important that security continues to be a business enabler. There are times we interact with websites in countries that may have questionable points of view or human rights records. Does the AI block those countries because the training data indicates it shouldn't support or provide access? Now some organisations will do domain blocking to an extreme level and require processes and approvals to access a website, it's archaic and ridiculous in my opinion. Can AI help in that space? Almost certainly.
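The system-prompt technique the speaker mentions can be sketched generically. This hypothetical example just assembles the role/content message shape that most chat LLM APIs accept (the prompt text and function name are illustrative, not tied to any specific vendor's API):

```python
def build_messages(system_prompt: str, user_query: str) -> list[dict]:
    """Assemble a chat payload in the role/content shape most LLM APIs accept."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_query},
    ]

# The system prompt encodes the analyst's requirements: show where and why
# findings were identified, state their possible impact, and flag assumptions.
SYSTEM_PROMPT = (
    "You are a threat-intelligence assistant. For every finding, state "
    "where and why it was identified and its possible impact. "
    "Explicitly flag any assumptions you make."
)

messages = build_messages(SYSTEM_PROMPT, "Analyse the attached IOC list.")
```

The point is that the bias-countering instructions live in the system role, so every analyst query inherits them rather than relying on each user remembering to ask.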


AI and the Future of Software Testing: Will Human Testers Become Obsolete?

With generative AI tools, it has become possible to produce software testing code automatically. QA engineers can simply describe what they want to test and specify a testing framework, tool, or language, then let generative AI do the tedious work of writing out the code. Test engineers often need to validate and tweak the AI-generated code, just as software developers most often rework some parts of application code produced by AI. But by writing unit tests and other software tests automatically, AI can dramatically reduce the time that QA engineers spend creating tests. ... AI tools can also assist in evaluating test results. This is important because, in the past, a test failure typically meant that a QA engineer had to sit down with developers, figure out why the test failed, and formulate a plan for fixing whichever flaw triggered the issue. AI can automate this process in many cases by evaluating test results and corresponding application code and then making recommendations about how to fix an issue. Although it's not realistic to expect AI to be capable of entirely automating all software test assessments, it can do much of the tedious work. ... At the same time, though, AI will almost certainly reduce the need for human software testers, which could lead to some job losses in this area. 
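The kind of boilerplate test AI tools generate, and QA engineers then validate and tweak, looks like the following sketch. Everything here is hypothetical: `normalize_email` stands in for a function under test, and the two cases are the sort a generated suite would cover:

```python
import unittest

def normalize_email(address: str) -> str:
    """Hypothetical function under test: trims whitespace and lowercases."""
    return address.strip().lower()

class TestNormalizeEmail(unittest.TestCase):
    """Illustrative AI-generated unit tests a QA engineer would review."""

    def test_strips_whitespace_and_lowercases(self):
        self.assertEqual(normalize_email("  User@Example.COM "), "user@example.com")

    def test_idempotent(self):
        once = normalize_email(" A@B.co ")
        self.assertEqual(normalize_email(once), once)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestNormalizeEmail)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The human value-add is exactly the review step: checking that generated cases like these actually exercise the behavior that matters, not just the easy paths.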


From Convenience to Vulnerability: The Dual Role of APIs in Modern Services

Recently, a non-exploited vulnerability was discovered within a popular Travel Service that could have enabled attackers to take over victim accounts with a single click. Such an attack is called an "API Supply Chain Attack," in which an attacker chooses to attack a weaker link in the service's API ecosystem. While the takeover could occur within the integrated service, it likely would have provided attackers full access to the user's personally identifiable information (PII) from the main account, including all mileage and rewards data. Beyond mere data exposure, attackers could perform actions on behalf of the user, such as creating orders or modifying account details. This critical risk highlights the vulnerabilities in third-party integrations and the importance of stringent security protocols to protect users from unauthorized account access and manipulation. Vigilance, governance, and explicit control of APIs are essential for safeguarding against security gaps and vulnerabilities within API ecosystems. Organizations must prioritize investing in comprehensive API tools and software that support the entire API lifecycle. This includes identifying and cataloging all APIs in use to ensure visibility and control, continuously assessing and improving the security posture of APIs to mitigate risks, and implementing robust security measures to detect and respond to potential threats targeting APIs. 
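The "identify and catalog all APIs" step above reduces to a comparison between what is documented and what is actually observed on the wire. This toy sketch (endpoint names are invented for illustration) flags "shadow" endpoints that appear in traffic but not in the catalog, the blind spots where supply-chain weaknesses hide:

```python
# Illustrative inventory: what the API catalog says exists.
DOCUMENTED_APIS = {"/v1/orders", "/v1/users", "/v1/rewards"}

# Illustrative observation: endpoints actually seen in gateway traffic logs.
OBSERVED_TRAFFIC = {"/v1/orders", "/v1/users", "/v1/rewards", "/v1/legacy-export"}

def find_shadow_apis(documented: set, observed: set) -> set:
    """Endpoints seen in traffic but missing from the catalog are unmanaged risk."""
    return observed - documented

shadow = find_shadow_apis(DOCUMENTED_APIS, OBSERVED_TRAFFIC)
```

Real API discovery tools build the observed set from gateway, proxy, and traffic-mirroring data, but the governance logic is this same set difference run continuously.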


Scientists Tested AI For Cognitive Decline. The Results Were a Shock.

Today, the famous large language model (LLM) is just one of several leading programs that appear convincingly human in their responses to basic queries. That uncanny resemblance may extend further than intended, with researchers from Israel now finding LLMs suffer a form of cognitive impairment similar to decline in humans, one that is more severe among earlier models. The team applied a battery of cognitive assessments to publicly available 'chatbots': versions 4 and 4o of ChatGPT, two versions of Alphabet's Gemini, and version 3.5 of Anthropic's Claude. Were the LLMs truly intelligent, the results would be concerning. In their published paper, neurologists Roy Dayan and Benjamin Uliel from Hadassah Medical Center and Gal Koplewitz, a data scientist at Tel Aviv University, describe a level of "cognitive decline that seems comparable to neurodegenerative processes in the human brain." For all of their personality, LLMs have more in common with the predictive text on your phone than the principles that generate knowledge using the squishy grey matter inside our heads. What this statistical approach to text and image generation gains in speed and personability, it loses in gullibility, building code according to algorithms that struggle to sort meaningful snippets of text from fiction and nonsense.


6 reasons so many IT orgs fail to exceed expectations today

“CIOs at large organizations know what they’ve got to hit. They know what they have to do to exceed expectations. But it’s more common that CIOs at smaller and less mature organizations have unclear objectives,” says Mark Taylor, CEO of the Society for Information Management (SIM). ... Doing all that work around expectation setting may still not be enough, as CIOs frequently find that the expectations set for them and their teams can shift suddenly. “Those moving targets happen all the time, especially when it comes to innovation,” says Peter Kreutter, WHU Otto Beisheim School of Management’s CIO Leadership Excellence Program faculty director and a member of the board of trustees for CIO Stiftung. ... “Fundamental challenges, such as legacy technology infrastructure and rigid operating cost structures, were at the core of failure rates,” the report reads. “These frequently limited the effectiveness of margin improvement initiatives and their impact on the bottom line. Unfortunately, this may only get worse, with uncertainty as a constant and the push for gen AI and data across enterprises.” ... Confusion about accountability — that is, who is really accountable for what results — is another obstacle for CIOs and IT teams as they aim high, according to Swartz.