Daily Tech Digest - January 31, 2025


Quote for the day:

“If you genuinely want something, don’t wait for it–teach yourself to be impatient.” -- Gurbaksh Chahal


GenAI fueling employee impersonation with biometric spoofs and counterfeit ID fraud

The annual AuthenticID report underlines the surging wave of AI-powered identity fraud, with rising biometric spoofs and counterfeit ID fraud attempts. The 2025 State of Identity Fraud Report also looks at how identity verification tactics and technology innovations are tackling the problem. “In 2024, we saw just how sophisticated fraud has now become: from deepfakes to sophisticated counterfeit IDs, generative AI has changed the identity fraud game,” said Blair Cohen, AuthenticID founder and president. ... “In 2025, businesses should embrace the mentality to ‘think like a hacker’ to combat new cyber threats,” said Chris Borkenhagen, chief digital officer and information security officer at AuthenticID. “Staying ahead of evolving strategies such as AI deepfake-generated documents and biometrics, emerging technologies, and bad actor account takeover tactics are crucial in protecting your business, safeguarding data, and building trust with customers.” ... Face biometric verification company iProov has identified the Philippines as a particular hotspot for digital identity fraud, with corresponding need for financial institutions and consumers to be vigilant. “There is a massive increase at the moment in terms of identity fraud against systems using generative AI in particular and deepfakes,” said iProov chief technology officer Dominic Forrest.


Cyber experts urge proactive data protection strategies

"Every organisation must take proactive measures to protect the critical data it holds," Montel stated. Emphasising foundational security practices, he advised organisations to identify their most valuable information and protect potential attack paths. He noted that simple steps can contribute significantly to overall security. On the consumer front, Montel highlighted the pervasive nature of data collection, reminding individuals of the importance of being discerning about the personal information they share online. "Think before you click," he advised, underscoring the potential of openly shared public information to be exploited by cybercriminals. Adding to the discussion on data resilience, Darren Thomson, Field CTO at Commvault, emphasised the changing landscape of cyber defence and recovery strategies needed by organisations. Thomson pointed out that mere defensive measures are not sufficient; rapid recovery processes are crucial to maintain business resilience in the event of a cyberattack. The concept of a "minimum viable company" is pivotal, where businesses ensure continuity of essential operations even when under attack. With cybercriminal tactics becoming increasingly sophisticated, organisations can no longer rely solely on traditional backups.


Trump Administration Faces Security Balancing Act in Borderless Cyber Landscape

The borderless nature of cyber threats and AI, the scale of worldwide commerce, and the globally interconnected digital ecosystem pose significant challenges that transcend partisanship. As recent experience makes us all too aware, an attack originating in one country, state, sector, or company can spread almost instantaneously, and with devastating impact. Consequently, whatever the ideological preferences of the Administration, from a pragmatic perspective cybersecurity must be a collaborative national (and international) activity, supported by regulations where appropriate. It’s an approach taken in the European Union, whose member states are now subject to the second Network and Information Security Directive (NIS2)—focused on critical national infrastructure and other important sectors—and the financial sector-focused Digital Operational Resilience Act (DORA). Both regulations seek to create a rising tide of cyber resilience that lifts all boats, and one of the core elements of both is a focus on reporting and threat intelligence sharing. In-scope organizations are required to implement robust measures to detect cyber attacks, report breaches in a timely way, and, wherever possible, share the information they accumulate on threats, attack vectors, and techniques with the EU’s central cybersecurity agency (ENISA).


Infrastructure as Code: From Imperative to Declarative and Back Again

Today, tools like CDK for Terraform (CDKTF) and Pulumi have become popular choices among engineers. These tools allow developers to write IaC using familiar programming languages like Python, TypeScript, or Go. At first glance, this is a return to imperative IaC. However, under the hood, they still generate declarative configurations — such as Terraform plans or CloudFormation templates — that define the desired state of the infrastructure. Why the resurgence of imperative-style interfaces? The answer lies in a broader trend toward improving developer experience (DX), enabling self-service, and enhancing accessibility. Much like the shifts we’re seeing in fields such as platform engineering, these tools are designed to streamline workflows and empower developers to work more effectively. ... The current landscape represents a blending of philosophies. While IaC tools remain fundamentally declarative in managing state and resources, they increasingly incorporate imperative-like interfaces to enhance usability. The move toward imperative-style interfaces isn’t a step backward. Instead, it highlights a broader movement to prioritize developer accessibility and productivity, aligning with the emphasis on streamlined workflows and self-service capabilities.
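As a rough illustration of how this blend works, here is a minimal Python sketch (not the actual CDKTF or Pulumi API) in which imperative code, with ordinary loops and conditionals, only accumulates a declarative desired-state document for an engine to reconcile later:

```python
import json

class Stack:
    """Collects resources imperatively, then emits a declarative desired-state document."""
    def __init__(self):
        self.resources = {}

    def add_bucket(self, name, versioned=False):
        # An imperative call, but it only records desired state -- nothing is created here.
        self.resources[name] = {"type": "storage_bucket", "versioned": versioned}

    def synth(self):
        # Emit the declarative plan an engine (e.g. Terraform) would reconcile against.
        return json.dumps({"resources": self.resources}, indent=2, sort_keys=True)

stack = Stack()
for env in ["dev", "staging", "prod"]:  # ordinary Python control flow
    stack.add_bucket(f"logs-{env}", versioned=(env == "prod"))

plan = stack.synth()
print(plan)
```

The loop reads as imperative code, yet the output is a static description of three buckets; that separation is what lets the underlying engine diff desired state against reality.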


How to Train AI Dragons to Solve Network Security Problems

We all know AI’s mantra: More data, faster processing, large models and you’re off to the races. But what if a problem is so specific — like network or DDoS security — that it doesn’t have a lot of publicly or privately available data you can use to solve it? As with other AI applications, the quality of the data you feed an AI-based DDoS defense system determines the accuracy and effectiveness of its solutions. To train your AI dragon to defend against DDoS attacks, you need detailed, real-world DDoS traffic data. Since this data is not widely and publicly available, your best option is to work with experts who have access to this data or, even better, have analyzed and used it to train their own AI dragons. To ensure effective DDoS detection, look at real-world, network-specific data and global trends as they apply to the network you want to protect. This global perspective adds valuable context that makes it easier to detect emerging or worldwide threats. ... Predictive AI models shine when it comes to detecting DDoS patterns in real-time. By using machine learning techniques such as time-series analysis, classification and regression, they can recognize patterns of attacks that might be invisible to human analysts. 
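As a toy illustration of the time-series angle, the sketch below flags traffic buckets whose request rate deviates sharply from a trailing baseline. Production systems train real models over many features; the traffic numbers and threshold here are invented for demonstration:

```python
from statistics import mean, stdev

def detect_bursts(requests_per_sec, window=5, z_threshold=3.0):
    """Flag time buckets whose request rate deviates sharply from the trailing window.

    A stand-in for the time-series techniques the article mentions: real DDoS
    defenses combine many features (sources, ports, packet sizes), not one z-score.
    """
    flagged = []
    for i in range(window, len(requests_per_sec)):
        history = requests_per_sec[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            sigma = 1e-9  # avoid division by zero on perfectly flat baselines
        if (requests_per_sec[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

# Steady baseline traffic with a sudden volumetric spike at t=8
traffic = [100, 102, 98, 101, 99, 100, 103, 97, 5000, 4800]
print(detect_bursts(traffic))  # -> [8]
```

Note the second spike bucket (t=9) goes unflagged because the first spike has already polluted the trailing window, which is exactly why real systems also incorporate global trend data rather than purely local baselines.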


How law enforcement agents gain access to encrypted devices

When a mobile device is seized, law enforcement can request the PIN, password, or biometric data from the suspect to access the phone if they believe it contains evidence relevant to an investigation. In England and Wales, if the suspect refuses, the police can give a notice for compliance, and a further refusal is in itself a criminal offence under the Regulation of Investigatory Powers Act (RIPA). “If access is not gained, law enforcement use forensic tools and software to unlock, decrypt, and extract critical digital evidence from a mobile phone or computer,” says James Farrell, an associate at cyber security consultancy CyXcel. “However, there are challenges on newer devices and success can depend on the version of operating system being used.” ... Law enforcement agencies have pressured companies, Apple being a prominent example, to create “lawful access” solutions, particularly on smartphones. “You also have the co-operation of cloud companies, which if backups are held can sidestep the need to break the encryption of a device altogether,” Closed Door Security’s Agnew explains. The security community has long argued against law enforcement backdoors, not least because they create security weaknesses that criminal hackers might exploit. “Despite protests from law enforcement and national security organizations, creating a skeleton key to access encrypted data is never a sensible solution,” CreateFuture’s Watkins argues.


The quantum computing reality check

Major cloud providers have made quantum computing accessible through their platforms, which creates an illusion of readiness for enterprise adoption. However, this accessibility masks a fatal flaw: Most quantum computing applications remain experimental. Indeed, most require deep expertise in quantum physics and specialized programming knowledge. Real-world applications are severely limited, and the costs are astronomical compared to the actual value delivered. ... The timeline to practical quantum computing applications is another sobering reality. Industry experts suggest we’re still 7 to 15 years away from quantum systems capable of handling production workloads. This extended horizon makes it difficult to justify significant investments. Until then, more immediate returns could be realized through existing technologies. ... The industry’s fascination with quantum computing has made companies fear being left behind or, worse, not being part of the “cool kids club”; they want to deliver extraordinary presentations to investors and customers. We tend to jump into new trends too fast because the allure of being part of something exciting and new is just too compelling. I’ve fallen into this trap myself. ... Organizations must balance their excitement for quantum computing with practical considerations about immediate business value and return on investment. I’m optimistic about the potential value in quantum computing as a service (QaaS).


Digital transformation in banking: Redefining the role of IT-BPM services

IT-BPM services are the engine of digital transformation in banking. They streamline operations through automation technologies like RPA, enhancing efficiency in processes such as customer onboarding and loan approvals. This automation reduces errors and frees up staff for strategic tasks like personalised customer support. By harnessing big data analytics, IT-BPM empowers banks to personalise services, detect fraud, and make informed decisions, ultimately improving both profitability and customer satisfaction. Robust security measures and compliance monitoring are also integral, ensuring the protection of sensitive customer data in the increasingly complex digital landscape. ... IT-BPM services are crucial for creating seamless, multi-channel customer experiences. They enable the development of intuitive platforms, including AI-driven chatbots and mobile apps, providing instant support and convenient financial management. This focus extends to personalised services tailored to individual customer needs and preferences, and a truly integrated omnichannel experience across all banking platforms. Furthermore, IT-BPM fosters agility and innovation by enabling rapid development of new digital products and services and facilitating collaboration with fintech companies.


Revolutionizing data management: Trends driving security, scalability, and governance in 2025

Artificial Intelligence and Machine Learning transform traditional data management paradigms by automating labour-intensive processes and enabling smarter decision-making. In the upcoming years, augmented data management solutions will drive efficiency and accuracy across multiple domains, from data cataloguing to anomaly detection. AI-driven platforms process vast datasets to identify patterns, automating tasks like metadata tagging, schema creation and data lineage mapping. ... In 2025, data masking will not be merely a compliance tool for GDPR, HIPAA, or CCPA; it will be a strategic enabler. With the rise in hybrid and multi-cloud environments, businesses will increasingly need to secure sensitive data across diverse systems. Vendors such as IBM, K2view, Oracle, and Informatica will revolutionize data masking by offering scalable, real-time, context-aware masking. ... Real-time integration enhances customer experiences through dynamic pricing, instant fraud detection, and personalized recommendations. These capabilities rely on distributed architectures designed to handle diverse data streams efficiently. The focus on real-time integration extends beyond operational improvements.
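To make the masking idea concrete, here is a minimal sketch of deterministic pseudonymization: the same input always maps to the same token, so masked datasets remain joinable across systems, but the original value is not recoverable without the key. This illustrates the concept only and is not any vendor's implementation; the key is a placeholder:

```python
import hashlib
import hmac

SECRET = b"rotate-me"  # hypothetical masking key; store in a secrets manager in practice

def mask_email(email: str) -> str:
    """Deterministically pseudonymize an email address.

    Same input -> same token, so joins across masked datasets still work,
    while the keyed HMAC prevents recovering the original without the key.
    """
    local, _, domain = email.partition("@")
    digest = hmac.new(SECRET, local.encode(), hashlib.sha256).hexdigest()[:12]
    return f"{digest}@{domain}"

masked = mask_email("jane.doe@example.com")
print(masked)
```

Keeping the domain intact preserves analytic utility (e.g. per-domain aggregation), which is the "context-aware" part of the trade-off: the more format you preserve, the more information leaks.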


Deploying AI at the edge: The security trade-offs and how to manage them

The moment you bring compute nodes into the far edge, you’re automatically exposing a lot of security challenges in your network. Even if you expect them to be “disconnected devices,” they could intermittently connect to transmit data. So, your security footprint is expanded. You must ensure that every piece of the stack you’re deploying at the edge is secure and trustworthy, including the edge device itself. When considering security for edge AI, you have to think about transmitting the trained model, runtime engine, and application from a central location to the edge, opening up the opportunity for a person-in-the-middle attack. ... In military operations, continuous data streams from millions of global sensors generate an overwhelming volume of information. Cloud-based solutions are often inadequate due to storage limitations, processing capacity constraints, and unacceptable latency. Therefore, edge computing is crucial for military applications, enabling immediate responses and real-time decision-making. In commercial settings, many environments lack reliable or affordable connectivity. Edge AI addresses this by enabling local data processing, minimizing the need for constant communication with the cloud. This localized approach enhances security. Instead of transmitting large volumes of raw data, only essential information is sent to the cloud. 
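The point that "only essential information is sent to the cloud" can be sketched in a few lines: raw readings are processed locally, and only an aggregate plus alert-worthy records leave the device. The field names and threshold below are invented for illustration:

```python
def summarize_for_cloud(readings, alert_threshold=90.0):
    """Process raw sensor readings at the edge and emit only the essentials:
    an aggregate plus any alert-worthy events, instead of the full raw stream."""
    alerts = [r for r in readings if r["value"] >= alert_threshold]
    payload = {
        "count": len(readings),
        "mean": sum(r["value"] for r in readings) / len(readings),
        "alerts": alerts,  # only the anomalous raw records leave the device
    }
    return payload

raw = [{"ts": t, "value": v} for t, v in enumerate([40.0, 42.5, 95.2, 41.0])]
print(summarize_for_cloud(raw))
```

Shrinking the transmitted payload reduces both bandwidth and the attack surface of data in transit, though the summarization logic itself now becomes part of the stack that must be secured on the device.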


Daily Tech Digest - January 30, 2025


Quote for the day:

"Uncertainty is not an indication of poor leadership; it underscores the need for leadership." -- Andy Stanley


Doing authentication right

Like encryption, authentication is one of those things that you are tempted to “roll your own” but absolutely should not. The industry has progressed enough that you should definitely “buy and not build” your authentication solution. Plenty of vendors offer easy-to-implement solutions and stay diligently on top of the latest security issues. Authentication also becomes a tradeoff between security and a good user experience. ... Passkeys are a relatively new technology and there is a lot of FUD floating around out there about them. The bottom line is that they are safe, secure, and easy for your users. They should be your primary way of authenticating. Several vendors make implementing passkeys not much harder than inserting a web component in your application. ... Forcing users to use hard-to-remember passwords means they will be more likely to write them down or use a simple password that meets the requirements. Again, it may seem counterintuitive, but XKCD has it right. In addition, the longer the password, the harder it is to crack. Let your users create long, easy-to-remember passwords rather than force them to use shorter, difficult-to-remember passwords. ... Six digits is the outer limit for OTP codes, and you should consider shorter ones. Under no circumstances should you require OTPs longer than six digits because they are vastly harder for users to keep in short-term memory.
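The XKCD point is easy to check with back-of-the-envelope entropy math: a randomly chosen multi-word passphrase matches a short "complex" password bit-for-bit while being far easier to remember (the comic pegs a human-mutated dictionary word like "Tr0ub4dor&3" at only about 28 bits, versus about 44 for "correct horse battery staple"):

```python
import math

def entropy_bits(pool_size: int, length: int) -> float:
    """Entropy of a secret chosen uniformly at random: length * log2(pool size)."""
    return length * math.log2(pool_size)

# Four words picked randomly from a 7776-word Diceware-style list
passphrase = entropy_bits(7776, 4)    # ~51.7 bits, and memorable

# Eight truly random printable-ASCII characters (~95 symbols)
random_short = entropy_bits(95, 8)    # ~52.6 bits, but hard to remember

# Note: a human-chosen word with l33t substitutions is far weaker than either,
# because attackers model exactly those mutations.
print(round(passphrase, 1), round(random_short, 1))
```

The two secrets are nearly equivalent against brute force, so the usability difference should decide, which is the article's argument for long, easy-to-remember passwords.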


Augmenting Software Architects with Artificial Intelligence

Technical debt is mistakenly thought of as just a source code problem, but the concept is also applicable to source data (this is referred to as data debt) as well as your validation assets. AI has been used for years to analyze existing systems to identify potential opportunities to improve the quality (to pay down technical debt). SonarQube, CAST SQG and BlackDuck’s Coverity Static Analysis statically analyze existing code. Applitools Visual AI dynamically finds user interface (UI) bugs, and Veracode’s DAST dynamically finds runtime vulnerabilities in web apps. The advantage of this use case is that it pinpoints aspects of your implementation that potentially should be improved. As described earlier, AI tooling offers the potential for greater range, thoroughness, and trustworthiness of the work products as compared with that of people. Drawbacks to using AI tooling to identify technical debt include the accuracy, IP, and privacy risks described above. ... As software architects we regularly work with legacy implementations that we need to leverage and often evolve. This software is often complex, using a myriad of technologies for reasons that have been forgotten over time. Tools such as CAST Imaging, which visualizes existing code, and ChartDB, which visualizes legacy data schemas, provide a “bird’s-eye view” of the actual situation that you face.


Keep Your Network Safe From the Double Trouble of a ‘Compound Physical-Cyber Threat'

Your first step should be to evaluate the state of your company’s cyber defenses, including communications and IT infrastructure, and the cybersecurity measures you already have in place—identifying any vulnerabilities and gaps. One vulnerability to watch for is a dependence on multiple security platforms, patches, policies, hardware, and software, where a lack of tight integration can create gaps that hackers can readily exploit. Consider using operational resilience assessment software as part of the exercise, and if you lack the internal know-how or resources to manage the assessment, consider enlisting a third-party operational resilience risk consultant. ... Aging network communications hardware and software, including on-premises systems and equipment, are top targets for hackers during a disaster because they often include a single point of failure that’s readily exploitable. The best counter in many cases is to move the network and other key communications infrastructure (a contact center, for example) to the cloud. Not only do cloud-based networks such as SD-WAN (software-defined wide area network) have the resilience and flexibility to preserve connectivity during a disaster, they also tend to come with built-in cybersecurity measures.


California’s AG Tells AI Companies Practically Everything They’re Doing Might Be Illegal

“The AGO encourages the responsible use of AI in ways that are safe, ethical, and consistent with human dignity,” the advisory says. “For AI systems to achieve their positive potential without doing harm, they must be developed and used ethically and legally,” it continues, before dovetailing into the many ways in which AI companies could, potentially, be breaking the law. ... There has been quite a lot of, shall we say, hyperbole, when it comes to the AI industry and what it claims it can accomplish versus what it can actually accomplish. Bonta’s office says that, to steer clear of California’s false advertising law, companies should refrain from “claiming that an AI system has a capability that it does not; representing that a system is completely powered by AI when humans are responsible for performing some of its functions; representing that humans are responsible for performing some of a system’s functions when AI is responsible instead; or claiming without basis that a system is accurate, performs tasks better than a human would, has specified characteristics, meets industry or other standards, or is free from bias.” ... Bonta’s memo clearly illustrates what a legal clusterfuck the AI industry represents, though it doesn’t even get around to mentioning U.S. copyright law, which is another legal gray area where AI companies are perpetually running into trouble.


Knowledge graphs: the missing link in enterprise AI

Knowledge graphs are a layer of connective tissue that sits on top of raw data stores, turning information into contextually meaningful knowledge. So in theory, they’d be a great way to help LLMs understand the meaning of corporate data sets, making it easier and more efficient for companies to find relevant data to embed into queries, and making the LLMs themselves faster and more accurate. ... Knowledge graphs reduce hallucinations, he says, but they also help solve the explainability challenge. Knowledge graphs sit on top of traditional databases, providing a layer of connection and deeper understanding, says Anant Adya, EVP at Infosys. “You can do better contextual search,” he says. “And it helps you drive better insights.” Infosys is now running proof of concepts to use knowledge graphs to combine the knowledge the company has gathered over many years with gen AI tools. ... When a knowledge graph is used as part of the RAG infrastructure, explicit connections can be used to quickly zero in on the most relevant information. “It becomes very efficient,” said Duvvuri. And companies are taking advantage of this, he says. “The hard question is how many of those solutions are seen in production, which is quite rare. But that’s true of a lot of gen AI applications.”
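A toy sketch shows why explicit connections make retrieval efficient: starting from an entity in the query, you follow edges to collect grounded facts for the prompt, instead of relying on vector similarity alone. The triples below are invented placeholder data:

```python
# Toy triple store standing in for a knowledge graph used in RAG.
TRIPLES = [
    ("AcmeCorp", "acquired", "WidgetCo"),
    ("WidgetCo", "manufactures", "widgets"),
    ("AcmeCorp", "headquartered_in", "Berlin"),
]

def neighborhood(entity, triples, hops=2):
    """Collect facts reachable from an entity within `hops` edges."""
    frontier, facts = {entity}, []
    for _ in range(hops):
        next_frontier = set()
        for s, p, o in triples:
            if s in frontier and (s, p, o) not in facts:
                facts.append((s, p, o))
                next_frontier.add(o)
        frontier = next_frontier
    return facts

# Facts to embed in the LLM prompt for a question about "AcmeCorp"
context = [f"{s} {p} {o}" for s, p, o in neighborhood("AcmeCorp", TRIPLES)]
print(context)
```

Because each retrieved fact is an explicit edge, the answer can cite exactly which triples grounded it, which is the explainability benefit the article describes; a pure embedding search offers no such trace.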


U.S. Copyright Office says AI generated content can be copyrighted — if a human contributes to or edits it

The Copyright Office determined that prompts are generally instructions or ideas rather than expressive contributions, which are required for copyright protection. Thus, an image generated with a text-to-image AI service such as Midjourney or OpenAI’s DALL-E 3 (via ChatGPT), on its own could not qualify for copyright protection. However, if the image was used in conjunction with a human-authored or human-edited article (such as this one), then it would seem to qualify. Similarly, for those looking to use AI video generation tools such as Runway, Pika, Luma, Hailuo, Kling, OpenAI Sora, Google Veo 2 or others, simply generating a video clip based on a description would not qualify for copyright. Yet, a human editing together multiple AI generated video clips into a new whole would seem to qualify. The report also clarifies that using AI in the creative process does not disqualify a work from copyright protection. If an AI tool assists an artist, writer or musician in refining their work, the human-created elements remain eligible for copyright. This aligns with historical precedents, where copyright law has adapted to new technologies such as photography, film and digital media. ... While some had called for additional protections for AI-generated content, the report states that existing copyright law is sufficient to handle these issues.


From connectivity to capability: The next phase of private 5G evolution

Faster connectivity is just one positive aspect of private 5G networks; they are a foundation of the current digital era. These networks outperform conventional public 5G capabilities, giving businesses unmatched control, security, and flexibility. For instance, private 5G is essential to the seamless connection of billions of devices, ensuring ultra-low latency and excellent reliability in the worldwide IoT industry, which has the potential to reach $650.5 billion by 2026, as per Markets and Markets. Take digital twins, for example—virtual replicas of physical environments such as factories or entire cities. These replicas require real-time data streaming and ultra-reliable bandwidth to function effectively. Private 5G enables this by delivering consistent performance, turning theoretical models into practical tools that improve operational efficiency and decision-making. ... Private 5G is also making significant improvements in sectors that rely on efficiency and precision. For instance, in the logistics sector, it connects fleets, warehouses, and ports with fast, low-latency networks, streamlining operations throughout the supply chain. In fleet management, private 5G allows real-time tracking of vehicles, improving route planning and fuel use.


American CISOs should prepare now for the coming connected-vehicle tech bans

The rule BIS released is complex and intricate and relies on many pre-existing definitions and policies used by the Commerce Department for different commercial and industrial matters. However, in general, the restrictions and compliance obligations under the rule affect the entire US automotive industry, including all-new, on-road vehicles sold in the United States (except commercial vehicles such as heavy trucks, for which rules will be determined later.) All companies in the automotive industry, including importers and manufacturers of CVs, equipment manufacturers, and component suppliers, will be affected. BIS said it may grant limited specific authorizations to allow mid-generation CV manufacturers to participate in the rule’s implementation period, provided that the manufacturers can demonstrate they are moving into compliance with the next generation. ... Connected vehicles and related component suppliers are required to scrutinize the origins of vehicle connectivity systems (VCS) hardware and automated driving systems (ADS) software to ensure compliance. Suppliers must exclude components with links to the PRC or Russia, which has significant implications for sourcing practices and operational processes.


What to know about DeepSeek AI, from cost claims to data privacy

"Users need to be aware that any data shared with the platform could be subject to government access under China's cybersecurity laws, which mandate that companies provide access to data upon request by authorities," Adrianus Warmenhoven, a member of NordVPN's security advisory board, told ZDNET via email. According to some observers, the fact that R1 is open-source means increased transparency, giving users the opportunity to inspect the model's source code for signs of privacy-related activity. Regardless, DeepSeek also released smaller versions of R1, which can be downloaded and run locally to avoid any concerns about data being sent back to the company (as opposed to accessing the chatbot online). ... "DeepSeek's new AI model likely does use less energy to train and run than larger competitors' models," confirms Peter Slattery, a researcher on MIT's FutureTech team who led its Risk Repository project. "However, I doubt this marks the start of a long-term trend in lower energy consumption. AI's power stems from data, algorithms, and compute -- which rely on ever-improving chips. When developers have previously found ways to be more efficient, they have typically reinvested those gains into making even bigger, more powerful models, rather than reducing overall energy usage."


The AI Imperative: How CIOs Can Lead the Charge

For CIOs, AGI will take this to the next level. Imagine systems that don't just fix themselves but also strategize, optimize and innovate. AGI could automate 90% of IT operations, freeing up teams to focus on strategic initiatives. It could revolutionize cybersecurity by anticipating and neutralizing threats before they strike. It could transform data into actionable insights, driving smarter decisions across the organization. The key is to begin incrementally, prove the value and scale strategically. AGI isn't just a tool; it's a game-changer. ... Cybersecurity risks are real and imminent. Picture this: you're using an open-source AI model and suddenly, your system gets hacked. Turns out, a malicious contributor slipped in some rogue code. Sounds like a nightmare, right? Open-source AI is powerful, but has its fair share of risks. Vulnerabilities in the code, supply chain attacks and lack of appropriate vendor support are absolutely real concerns. But this is true for any new technology. With the right safeguards, we can minimize and mitigate these risks. Here's what I recommend: Regularly review and update open-source libraries. CIOs should encourage their teams to use tools like software composition analysis to detect suspicious changes. Train your team to manage and secure open-source AI deployments. 
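The software-composition-analysis recommendation can be sketched minimally: compare pinned dependencies against an advisory list. The package names and advisory data below are hypothetical; real SCA tools resolve full dependency trees and query live vulnerability databases:

```python
# Hypothetical advisory data: package -> versions with known vulnerabilities.
ADVISORIES = {
    "examplelib": {"1.4.0", "1.4.1"},
}

def audit(requirements):
    """Return (package, version) pairs pinned to a known-vulnerable release."""
    findings = []
    for line in requirements:
        name, _, version = line.partition("==")
        if version in ADVISORIES.get(name, set()):
            findings.append((name, version))
    return findings

reqs = ["examplelib==1.4.1", "othertool==2.0.3"]
print(audit(reqs))
```

Even this trivial check only works for exact pins; the gap between it and a production tool (transitive dependencies, version ranges, lockfile parsing) is why the article recommends adopting dedicated software composition analysis rather than ad hoc scripts.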

Daily Tech Digest - January 29, 2025


Quote for the day:

"Added pressure and responsibility should not change one's leadership style, it should merely expose that which already exists." -- Mark W. Boyer


Evil Models and Exploits: When AI Becomes the Attacker

A more structured threat emerges with technologies like the Model Context Protocol (MCP). Originally introduced by Anthropic, MCP allows large language models (LLMs) to interact with host machines via JSON-RPC APIs. This enables LLMs to perform sophisticated operations by controlling local resources and services. While MCP is being embraced by developers for legitimate use cases, such as automation and integration, its darker implications are clear. An MCP-enabled system could orchestrate a range of malicious activities with ease. Think of it as an AI-powered operator capable of executing everything from reconnaissance to exploitation. ... The proliferation of AI models is both a blessing and a curse. Platforms like Hugging Face host over a million models, ranging from state-of-the-art neural networks to poorly designed or maliciously altered versions. Amid this abundance lies a growing concern: model provenance. Imagine a widely used model, fine-tuned by a seemingly reputable maintainer, turning out to be a tool of a state actor. Subtle modifications in the training data set or architecture could embed biases, vulnerabilities or backdoors. These “evil models” could then be distributed as trusted resources, only to be weaponized later. This risk underscores the need for robust mechanisms to verify the origins and integrity of AI models.
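One basic integrity mechanism is checksum pinning: refuse to load a model artifact whose bytes do not match the digest the publisher distributed. A minimal sketch follows (the "weights" bytes here are a stand-in for a downloaded artifact):

```python
import hashlib

def verify_model(blob: bytes, expected_sha256: str) -> bool:
    """Refuse to load a model whose bytes don't match the publisher's checksum.

    Hash pinning catches silent tampering in transit or on a mirror; full
    provenance additionally needs signed metadata (who trained it, on what data).
    """
    return hashlib.sha256(blob).hexdigest() == expected_sha256

weights = b"\x00fake-model-weights\x01"       # stand-in for a downloaded artifact
pinned = hashlib.sha256(weights).hexdigest()  # value you'd pin from the publisher

print(verify_model(weights, pinned))                # True: artifact intact
print(verify_model(weights + b"backdoor", pinned))  # False: artifact altered
```

Checksums only prove the file is the one the maintainer published; they cannot tell you whether that maintainer's training data was already poisoned, which is why the article calls for provenance mechanisms beyond integrity checks.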


The tipping point for Generative AI in banking

Advancements in AI are allowing banks and other fintechs to embed the technology across their entire value chain. For example, TBC is leveraging AI to make 42% of all payment reminder calls to customers with loans that are up to 30 days or less overdue and is getting ready to launch other AI-enabled solutions. Customers normally cannot differentiate the AI calls powered by our tech from calls by humans, even as the AI calls are ten times more efficient for TBC’s bottom line, compared with human operator calls. Klarna rolled out an AI assistant, which handled 2.3 million conversations in its first month of operation, which accounts for two-thirds of Klarna’s customer service chats or the workload of 700 full-time agents, the company estimated. Deutsche Bank leverages generative AI for software creation and managing adverse media, while the European neobank Bunq applies it to detect fraud. Even smaller regional players, provided they have the right tech talent in place, will soon be able to deploy Gen AI at scale and incorporate the latest innovations into their operations. Next year is set to be a watershed year when this step change will create a clear division in the banking sector between AI-enabled champions and other players that will soon start lagging behind. 


Want to be an effective cybersecurity leader? Learn to excel at change management

Security should never be an afterthought; the change management process shouldn’t be, either, says Michael Monday, a managing director in the security and privacy practice at global consulting firm Protiviti. “The change management process should start early, before changing out the technology or process,” he says. “There should be some messages going out to those who are going to be impacted letting them know, [otherwise] users will be surprised, they won’t know what’s going on, business will push back and there will be confusion.” ... “It’s often the CISO who now has to push these new things,” says Moyle, a former CISO, founding partner of the firm SecurityCurve, and a member of the Emerging Trends Working Group with the professional association ISACA. In his experience, Moyle says he has seen some workers more willing to change than others and learned to enlist those workers as allies to help him achieve his goals. ... When it comes to the people portion, she tells CISOs to “feed supporters and manage detractors.” As for process, “identify the key players for the security program and understand their perspective. There are influencers, budget holders, visionaries, and other stakeholders — each of which needs to be heard, and persuaded, especially if they’re a detractor.”


Preparing financial institutions for the next generation of cyber threats

Collaboration between financial institutions, government agencies, and other sectors is crucial in combating next-generation threats. This cooperative approach enhances the ability to detect, respond to, and mitigate sophisticated threats more effectively. Visa regularly works with international agencies of all sizes, including the US Department of Justice, FBI, Secret Service, and Europol, to help identify and apprehend fraudsters and other criminals. Visa uses its AI and ML capabilities to identify patterns of fraud and cybercrime, and shares those findings with law enforcement to bring bad actors to justice. ... Financial institutions face distinct vulnerabilities compared to other industries, particularly due to their role in critical infrastructure and financial ecosystems. As high-value targets, they manage large sums of money and sensitive information, making them prime targets for cybercriminals. Their operations involve complex and interconnected systems, often including legacy technologies and numerous third-party vendors, which can create security gaps. Regulatory and compliance challenges add another layer of complexity, requiring stringent data protection measures to avoid hefty fines and maintain customer trust.


Looking back to look ahead: from Deepfakes to DeepSeek, what lies ahead in 2025

Enterprises increasingly turned to AI-native security solutions, employing continuous multi-factor authentication and identity verification tools. These technologies monitor behavioral patterns or other physical-world signals to prove identity, innovations that can now help prevent incidents like the North Korean hiring scheme. However, hackers may now gain another inside route to enterprise security. The new breed of unregulated and offshore LLMs like DeepSeek creates new opportunities for attackers. In particular, DeepSeek’s AI model gives attackers a powerful tool to discover and exploit the cyber vulnerabilities of any organization. ... Deepfake technology continues to blur the lines between reality and fiction. ... Organizations must combat the increasing complexity of identity fraud, hackers, cybersecurity thieves, and data center poachers each year. In addition to all of the threats mentioned above, 2025 will bring an increasing need to address IoT and OT security issues, data protection in third-party cloud and AI infrastructure, and the use of AI agents in the SOC. To help thwart this year’s cyber threats, CISOs and CTOs must work together, communicate often, and identify areas to minimize risks for deepfake fraud across identity, brand protection, and employee verification.


The Product Model and Agile

First, the product model is not new; it’s been out there for more than 20 years. So I have never argued that the product model is “the next new thing,” as I think that’s not true. Strong product companies have been following the product model for decades, but most companies around the world have only recently been exposed to this model, which is why so many people think of it as new. Second, while I know this irritates many people, today there are very different definitions of what it even means to be “Agile.” Some people consider SAFe as Agile. If that’s what you consider Agile, then I would say that Agile plays no part in the product model, as SAFe is pretty much the antithesis of the product model. This difference is often characterized today as “fake Agile” versus “real Agile.” And to be clear, if you’re running XP, or Kanban, or Scrum, or even none of the Agile ceremonies, yet you are consistently doing continuous deployment, then at least as far as I’m concerned, you’re running “real Agile.” Third, we should separate the principles of Agile from the various, mostly project management, processes that have been set up around those principles. ... Finally, it’s also important to point out that there is one Agile principle that might be good enough for custom or contract software work, but is not sufficient for commercial product work. This is the principle that “working software is the primary measure of progress.”


Next Generation Observability: An Architectural Introduction

It's always a challenge when creating architectural content, trying to capture real-world stories in a generic enough format to be useful without revealing any organization's confidential implementation details. We are basing these architectures on common customer adoption patterns. That's very different from most traditional marketing activities, which usually generate content for the sole purpose of positioning products as solutions. When you base the content on actual execution in solution delivery, you cut out the marketing chaff. This observability architecture provides us with a way to map a solution using open-source technologies, focusing on the integrations, structures, and interactions that have proven to work at scale. Where those might fail us at scale, we will provide other options. What's not included are vendor stories, which are normal in most marketing content: stories that, when it gets down to implementation crunch time, might not fully deliver on their promises. Let's look at the next-generation observability architecture and explore its value in helping our solution designs. The first step is always to clearly define what we are focusing on when we talk about the next-generation observability architecture.


AI SOC Analysts: Propelling SecOps into the future

Traditional, manual SOC processes already struggling to keep pace with existing threats are far outpaced by automated, AI-powered attacks. Adversaries are using AI to launch sophisticated and targeted attacks, putting additional pressure on SOC teams. To defend effectively, organizations need AI solutions that can rapidly sort signals from noise and respond in real time. AI-generated phishing emails are now so realistic that users are more likely to engage with them, leaving analysts to untangle the aftermath, deciphering user actions and gauging exposure risk, often with incomplete context. ... The future of security operations lies in seamless collaboration between human expertise and AI efficiency. This synergy doesn't replace analysts but enhances their capabilities, enabling teams to operate more strategically. As threats grow in complexity and volume, this partnership ensures SOCs can stay agile, proactive, and effective. ... Triaging and investigating alerts has long been a manual, time-consuming process that strains SOC teams and increases risk. Prophet Security changes that. By leveraging cutting-edge AI, large language models, and advanced agent-based architectures, Prophet AI SOC Analyst automatically triages and investigates every alert with unmatched speed and accuracy.


Apple researchers reveal the secret sauce behind DeepSeek AI

The ability to use only some of the total parameters of a large language model and shut off the rest is an example of sparsity. That sparsity can have a major impact on how big or small the computing budget is for an AI model. AI researchers at Apple, in a report out last week, explain nicely how DeepSeek and similar approaches use sparsity to get better results for a given amount of computing power. Apple has no connection to DeepSeek, but it conducts its own AI research on a regular basis, so the work of outside companies such as DeepSeek naturally falls within the scope of its continued involvement in the field. In the paper, titled "Parameters vs FLOPs: Scaling Laws for Optimal Sparsity for Mixture-of-Experts Language Models," posted on the arXiv pre-print server, lead author Samir Abnar of Apple and other Apple researchers, along with collaborator Harshay Shah of MIT, studied how performance varied as they exploited sparsity by turning off parts of the neural net. ... Abnar and team ask whether there's an "optimal" level for sparsity in DeepSeek and similar models: for a given amount of computing power, is there an optimal number of those neural weights to turn on or off? It turns out you can fully quantify sparsity as the percentage of all the neural weights you can shut down, with that percentage approaching but never equaling 100% of the neural net being "inactive."
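The arithmetic behind that percentage is easy to sketch. The following toy Python calculation is my own illustration, not code from the paper; the expert counts and parameter sizes are invented round numbers. It expresses a mixture-of-experts model's sparsity as the fraction of weights left inactive for any one token:

```python
def moe_sparsity(num_experts: int, active_experts: int,
                 params_per_expert: int, shared_params: int) -> float:
    """Fraction of total parameters left inactive for any one token."""
    total = num_experts * params_per_expert + shared_params
    active = active_experts * params_per_expert + shared_params
    return 1.0 - active / total

# Hypothetical model: 64 experts, 4 routed per token, plus always-on layers.
s = moe_sparsity(num_experts=64, active_experts=4,
                 params_per_expert=10_000_000, shared_params=50_000_000)
print(f"sparsity: {s:.1%}")  # → sparsity: 87.0%
```

The shared, always-active parameters (embeddings, attention, routing) are why the inactive fraction approaches but never reaches 100%, matching the paper's framing.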


What Data Literacy Looks Like in 2025

“The foundation of data literacy lies in having a basic understanding of data. Non-technical people need to master the basic concepts, terms, and types of data, and understand how data is collected and processed,” says Li. “Meanwhile, data literacy should also include familiarity with data analysis tools. ... “Organizations should also avoid the misconception that fostering GenAI literacy alone will help developing GenAI solutions. For this, companies need even greater investments in expert AI talent -- data scientists, machine learning engineers, data engineers, developers and AI engineers,” says Carlsson. “While GenAI literacy empowers individuals across the workforce, building transformative AI capabilities requires skilled teams to design, fine-tune and operationalize these solutions. Companies must address both.” ... “Data literacy in 2025 can’t just be about enabling employees to work with data. It needs to be about empowering them to drive real business value,” says Jain. “That’s how organizations will turn data into dollars and ensure their investments in technology and training actually pay off.” ... “Organizations can embed data literacy into daily operations and culture by making data-driven thinking a core part of every role,” says Choudhary.

Daily Tech Digest - January 28, 2025


Quote for the day:

“Winners are not afraid of losing. But losers are. Failure is part of the process of success. People who avoid failure also avoid success.” -- Robert T. Kiyosaki


How Long Does It Take Hackers to Crack Modern Hashing Algorithms?

Because hashing algorithms are one-way functions, the only method to compromise hashed passwords is through brute force techniques. Cyber attackers employ special hardware like GPUs and cracking software (e.g., Hashcat, L0phtCrack, John the Ripper) to execute brute force attacks at scale—typically millions or billions of combinations at a time. Even with these sophisticated purpose-built cracking tools, password cracking times can vary dramatically depending on the specific hashing algorithm used and the password's length and character composition. ... With readily available GPUs and cracking software, attackers can instantly crack numeric passwords of 13 characters or fewer secured by MD5's 128-bit hash; on the other hand, an 11-character password consisting of numbers, uppercase/lowercase characters, and symbols would take 26.5 thousand years. ... When used with long, complex passwords, SHA256 is nearly impenetrable using brute force methods: an 11-character SHA256-hashed password using numbers, upper/lowercase characters, and symbols takes 2052 years to crack using GPUs and cracking software. However, attackers can instantly crack nine-character SHA256-hashed passwords consisting of only numeric or lowercase characters.
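These estimates follow directly from keyspace arithmetic: charset size raised to the password length, divided by the guess rate. A rough sketch in Python, where the guess rates are assumed round numbers for illustration rather than figures from the report (real rates vary enormously by hardware and algorithm):

```python
YEAR_SECONDS = 3600 * 24 * 365

def crack_seconds(charset_size: int, length: int, rate: float) -> float:
    """Worst-case seconds to exhaust the keyspace by brute force."""
    return charset_size ** length / rate

MD5_RATE = 1e12      # assumed: MD5 is cheap, so GPU rigs guess very fast
SHA256_RATE = 1e10   # assumed: SHA-256 costs more per guess

# 13-digit numeric password at MD5 speed: the whole keyspace in seconds.
print(crack_seconds(10, 13, MD5_RATE))                  # → 10.0 (seconds)

# 11 chars drawn from ~95 printable characters at SHA-256 speed.
print(crack_seconds(95, 11, SHA256_RATE) / YEAR_SECONDS)  # ≈ 18,000 years
```

The exponential in the length term is why adding two or three characters, or widening the character set, moves a password from "instant" to "geological time."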


Sharply rising IT costs have CIOs threading the needle on innovation

“Within two years, it will be virtually impossible to buy a PC, tablet, laptop, or mobile phone without AI,” Lovelock says. “Whether you want it or not, you’re going to get it sold to you.” Vendors have begun to build AI into software as well, he says, and in many cases, charge customers for the additional functionality. IT consulting services will also add AI-based services to their portfolios. ... But the biggest expected price hikes are for cloud computing services, despite years of expectations that cloud prices wouldn’t increase significantly, Lovelock says. “For many years, CIOs were taught that in the cloud, either prices went down, or you got more functionality, and occasionally both, that the economies of scale accrue to the cloud providers and allow for at least stable prices, if not declines or functional expansion,” he says. “It wasn’t until post-COVID in the energy crisis, followed by staff cost increases, when that story turned around.” ... “Generative AI is no longer seen as a one-size-fits-all solution, and this shift is helping both solutions providers and businesses take a more practical approach,” he says. “We don’t see this as a sign of lower expectations but as a move toward responsible and targeted use of generative AI.”


US takes aim at healthcare cybersecurity with proposed HIPAA changes

The major update to the HIPAA security regulations also requires healthcare organizations to strengthen security incident response plans and procedures, carry out annual penetration tests and compliance audits, among other measures. Many of the proposals cover best practice enterprise security guidelines foundational to any mature cybersecurity program. ... Cybersecurity experts praised the shift to a risk-based approach covered by the security rule revamp, while some expressed concerns that the measures might tax the financial resources of smaller clinics and healthcare providers. “The security measures called for in the proposed rule update are proven to be effective and will mitigate many of the risks currently present in the poorly protected environments of many healthcare payers, providers, and brokers,” said Maurice Uenuma, VP & GM for the Americas and security strategist at data security firm Blancco. ... Uenuma added: “The challenge will be to implement these measures consistently at scale.” Trevor Dearing, director of critical infrastructure at enterprise security tools firm Illumio, praised the shift from prevention to resilience and the risk-based approach implicit in the rule changes, which he compared to the EU’s recently introduced DORA rules for financial sector organizations.


Risk resilience: Navigating the risks that boards can’t ignore in 2025

The geopolitical landscape is more turbulent than ever. Companies will need to prepare for potential shocks like regional conflicts, supply chain disruptions, or even another pandemic. If geopolitical risks feel dizzyingly complex, scenario planning will be a powerful tool in mapping out different political and economic scenarios. By envisioning various outcomes, boards can better understand their vulnerabilities, prepare tailored responses and enhance risk resilience. To prepare for the year ahead, board and management teams should ask questions such as: How exposed are we to geopolitical risks in our supply chain? Are we engaging effectively with local governments in key regions?  ... The risks of 2025 are formidable, but so are the opportunities for those who lead with purpose. With informed leadership and collaboration, we can navigate the complexities of the modern business environment with confidence and resilience. Resilience will be the defining trait of successful boards and businesses in the years ahead. It requires not only addressing known risks but also preparing for the unexpected. By prioritising scenario planning, fostering a culture of transparency, and aligning risk management with strategic goals, boards can navigate uncertainty with confidence.


Freedom from Cyber Threats: An AI-powered Republic on the Rise

Developing a resilient AI-driven cybersecurity infrastructure requires substantial investment. The Indian government’s allocation of over ₹550 crores to AI research demonstrates its commitment to innovation and data security. Collaborations with leading cybersecurity companies exemplify scalable solutions to secure digital ecosystems, prioritising resilience, ethical governance, and comprehensive data protection. Research tools like the Gartner Magic Quadrant also offer reliable and useful insights into the leading companies that offer the best and latest SIEM technology solutions. Upskilling the workforce is equally important. Training programs focused on AI-specific cybersecurity skills are preparing India’s talent pool to tackle future challenges effectively. ... Proactive strategies are essential to counter the evolution of cyber threats. Simulation tools enable organizations to anticipate and neutralise potential vulnerabilities. Now, cybersecurity threats can be intercepted by high-class threat detection SIEM data clouds and autonomous threat sweeps. Advanced threat research, conducted by dedicated labs within organisations, plays a crucial role in uncovering emerging attack vectors and providing actionable insights to pre-empt potential breaches. 


Enterprises are hitting a 'speed limit' in deploying Gen AI - here's why

The regulatory issue, the report states, makes clear "respondents' unease about which use cases will be acceptable, and to what extent their organizations will be held accountable for Gen AI-related problems." ... The latest iteration was conducted in July through September, and received 2,773 responses from "senior leaders in their organizations and included board and C-suite members, and those at the president, vice president, and director level," from 14 countries, including the US, UK, Brazil, Germany, Japan, Singapore, and Australia, and across industries including energy, finance, healthcare, and media and telecom. ... Despite the slow pace, Deloitte's CTO is confident in the continued development, and ultimate deployment, of Gen AI. "GenAI and AI broadly is our reality -- it's not going away," writes Bawa. Gen AI is ultimately like the Internet, cloud computing, and mobile waves that preceded it, he asserts. Those "transformational opportunities weren't uncovered overnight," he says, "but as they became pervasive, they drove significant disruption to business and technology capabilities, and also triggered many new business models, new products and services, new partnerships, and new ways of working and countless other innovations that led to the next wave across industries."


NVMe-oF Substantially Reduces Data Access Latency

NVMe-oF is a network protocol that extends the parallel access and low-latency features of the Nonvolatile Memory Express (NVMe) protocol across networked storage. Originally designed for local storage and common in direct-attached storage (DAS) architectures, NVMe delivers high-speed data access and low latency by directly interfacing with solid-state drives. NVMe-oF allows these same advantages to be achieved in distributed and clustered environments by enabling external storage to perform as if it were local. ... Storage targets can be dynamically shared among workloads, providing composable storage resources that offer flexibility, agility, and greater resource efficiency. The adoption of NVMe-oF is evident across industries where high performance, efficiency, and low latency at scale are critical. Notable market sectors include financial services, e-commerce, AI and machine learning, and specialty cloud service providers (CSPs). Legacy VM migration, real-time analytics, high-frequency trading, online transaction processing (OLTP), and the rapid development of cloud-native, performance-intensive workloads at scale are use cases that have compelled organizations to modernize their data platforms with NVMe-oF solutions. Its ability to handle massive data flows with efficiency and high performance makes it indispensable for I/O-intensive workloads.


The crisis of AI’s hidden costs

Let me paint you a picture of what keeps CFOs up at night. Imagine walking into a massive data center where 87% of the computers sit there, humming away, doing nothing. Sounds crazy, right? That’s exactly what’s happening in your cloud environment. If you manage a typical enterprise cloud computing operation, you are wasting money. It’s not rare to see companies spend $1 million monthly on cloud resources, with 75% to 80% of that amount going right out the window. It’s no mystery what this means for your bottom line. ... Smart enterprises aren’t just hoping the problem will disappear; they’re taking action. Here’s my advice: Don’t rely solely on the basic tools offered by your cloud provider; they won’t give you the immediate cost visibility you need. Instead, invest in third-party solutions that provide a clear, up-to-the-minute picture of your resource utilization. Focus on power-hungry GPUs running AI workloads. ... Rather than spinning up more instances, consider rightsizing. Modern instance types offered by public cloud providers can give you more bang for your buck. ... Predictive analytics can help you scale up or down based on demand, ensuring you’re not paying for idle resources. ... Be strategic and look at the bigger picture. Evaluate reserved instances and savings plans to balance cost and performance. 
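The waste arithmetic in that picture is straightforward. A minimal sketch using the hypothetical figures from the paragraph above (the $1M spend and 13% utilization are the article's illustrative numbers, and the model assumes cost scales linearly with allocated capacity):

```python
def monthly_waste(monthly_spend: float, utilization: float) -> float:
    """Spend attributable to idle capacity, assuming cost tracks allocation."""
    return monthly_spend * (1.0 - utilization)

# $1M/month with 87% of machines sitting idle (13% utilization)
print(f"${monthly_waste(1_000_000, 0.13):,.0f} wasted per month")  # $870,000
```

At that rate the annualized waste exceeds $10M, which is why the paragraph argues for utilization monitoring before buying more capacity.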


AI security posture management will be needed before agentic AI takes hold

We ran into these issues when most companies shifted their workloads to the cloud. Authentication issues – like the dreaded S3 bucket that had a default public setting and that was the cause of way too many breaches before it was made secure by default – became the domain of cloud security posture management (CSPM) tools before they were swallowed up by the CNAPP acronym. Identity and permission issues (or entitlements, if you prefer) became the alphabet soup of CIEM (cloud identity entitlement management), thankfully now also under the umbrella of CNAPP. AI bots will need to be monitored by similar toolsets, but they don’t exist yet. I’ll go out on a limb and suggest SAFAI (pronounced Sah-fy) as an acronym: Security Assessment Frameworks for AI. These would, much like CNAPP tools, embed themselves in agentless or transparent fashion, crawl through your AI bots collecting configuration, authentication and permission issues and highlight the pain points. You’d still need the standard panoply of other tools to protect you, since they sit atop the same infrastructure. And that’s on top of worrying about prompt injection opportunities, which is something you unfortunately have no control over as they are based entirely on the models and how they are used.


Hackers Use Malicious PDFs, Pose as USPS in Mobile Phishing Scam

The bad actors make the malicious PDFs look like communications from the USPS that are sent via SMS text messages and use what the researchers called in a report Monday a “never-before-seen means of obfuscation” to help them bypass traditional security controls. They embed the malicious links in the PDF, essentially hiding them from endpoint security solutions. ... The phishing attacks are part of a larger and growing trend of what Zimperium calls “mishing,” an umbrella word for campaigns that use email, text messages, voice calls, or QR codes that exploit such weaknesses as unsafe user behavior and minimal security on many mobile devices to infiltrate corporate networks and steal information. ... “We’re witnessing phishing evolve in real time beyond email into a sophisticated multi-channel threat, with attackers leveraging trusted brands like USPS, Royal Mail, La Poste, Deutsche Post, and Australian Post to exploit limited mobile device security worldwide,” Kowski said. “The discovery of over 20 malicious PDFs and 630 phishing pages targeting organizations across 50+ countries shows how threat actors capitalize on users’ trust in official-looking communications on mobile devices.” He also noted that internal disagreements are hampering corporations’ ability to protect against such attacks.


Daily Tech Digest - January 27, 2025


Quote for the day:

"Your problem isn't the problem. Your reaction is the problem." -- Anonymous


Revolutionizing Investigations: The Impact of AI in Digital Forensics

One of the most significant challenges in modern digital forensics, both in the corporate sector and law enforcement, is the abundance of data. Due to increasing digital storage capacities, even mobile devices today can accumulate up to 1TB of information. ... Digital forensics started benefiting from AI features a few years ago. The first major development in this regard was the implementation of neural networks for picture recognition and categorization. This powerful tool has been instrumental for forensic examiners in law enforcement, enabling them to analyze pictures from CCTV and seized devices more efficiently. It significantly accelerated the identification of persons of interest and child abuse victims as well as the detection of case-related content, such as firearms or pornography. ... No matter how advanced, AI operates within the boundaries of its training, which can sometimes be incomplete or imperfect. Large language models, in particular, may produce inaccurate information if their training data lacks sufficient detail on a given topic. As a result, investigations involving AI technologies require human oversight. In DFIR, validating discovered evidence is standard practice. It is common to use multiple digital forensics tools to verify extracted data and manually check critical details in source files. 


Is banning ransomware payments key to fighting cybercrime?

Implementing a payment ban is not without challenges. In the short term, retaliatory attacks are a real possibility as cybercriminals attempt to undermine the policy. However, given the prevalence of targets worldwide, I believe most criminal gangs will simply focus their efforts elsewhere. The government’s resolve would certainly be tested if payment of a ransom was seen as the only way to avoid public health data being leaked, energy networks being crippled, or preventing a CNI organization from going out of business. In such cases, clear guidelines as well as technical and financial support mechanisms for affected organizations are essential. Policy makers must develop playbooks for such scenarios and run education campaigns that raise awareness about the policy’s goals, emphasizing the long-term benefits of standing firm against ransom demands. That said, increased resilience—both technological and organizational—are integral to any strategy. Enhanced cybersecurity measures are critical, in particular a zero trust strategy that reduces an organization’s attack surface and stops hackers from being able to move laterally in the network. The U.S. federal government has already committed to move to zero trust architectures.


Building a Data-Driven Culture: Four Key Elements

Why is building a data-driven culture incredibly hard? Because it calls for a behavioral change across the organization. This work is neither easy nor quick. To better appreciate the scope of this challenge, let’s do a brief thought exercise. Take a moment to reflect on these questions: How involved are your leaders in championing and directly following through on data-driven initiatives? Do you know whether your internal stakeholders are all equipped and empowered to use data for all kinds of decisions, strategic or tactical? Does your work environment make it easy for people to come together, collaborate with data, and support one another when they’re making decisions based on the insights? Does everyone in the organization truly understand the benefits of using data, and are success stories regularly shared internally to inspire people to action? If your answers to these questions are “I’m not sure” or “maybe,” you’re not alone. Most leaders assume in good faith that their organizations are on the right path. But they struggle when asked for concrete examples or data-backed evidence to support these gut-feeling assumptions. The leaders’ dilemma becomes even more clear when you consider that the elements at the core of the four questions above — leadership intervention, data empowerment, collaboration, and value realization — are inherently qualitative. Most organizational metrics or operational KPIs don’t capture them today. 


How CIOs Should Prepare for Product-Led Paradigm Shift

Scaling product centricity in an organization is like walking a tightrope. Leaders must drive change while maintaining smooth operations. This requires forming cross-functional teams, outcome-based evaluation and navigating multiple operating models. As a CIO, balancing change while facing the internal resistance of a risk-averse, siloed business culture can feel like facing a strong wind on a high wire. ... The key to overcoming this is to demonstrate the benefits of a product-centric approach incrementally, proving its value until it becomes the norm. To prevent cultural resistance from derailing your vision for a more agile enterprise, leverage multiple IT operating models with a service or value orientation to meet the ambitious expectations of CEOs and boards. Engage the C-suite by taking a holistic view of how democratized IT can be used to meet stakeholder expectations. Every organization has a business and enterprise operating model to create and deliver value. A business model might focus on manufacturing products that delight customers, requiring the IT operating model to align with enterprise expectations. This alignment involves deciding whether IT will merely provide enabling services or actively partner in delivering external products and services.


CISOs gain greater influence in corporate boardrooms

"As the role of the CISO grows more complex and critical to organisations, CISOs must be able to balance security needs with business goals, culture, and articulate the value of security investments." She highlights the importance of strong relationships across departments and stakeholders in bolstering cybersecurity and privacy programmes. The study further discusses the positive impact of having board members with a cybersecurity background. These members foster stronger relationships with security teams and have more confidence in their organisation's security stance. For instance, boards with a CISO member report higher effectiveness in setting strategic cybersecurity goals and communicating progress, compared to boards without such expertise. CISOs with robust board relationships report improved collaboration with IT operations and engineering, allowing them to explore advanced technologies like generative AI for enhanced threat detection and response. However, gaps persist in priority alignment between CISOs and boards, particularly around emerging technologies, upskilling, and revenue growth. Expectations for CISOs to develop leadership skills add complexity to their role, with many recognising a gap in business acumen, emotional intelligence, and communication. 


Researchers claim Linux kernel tweak could reduce data center energy use by 30%

Researchers at the University of Waterloo's Cheriton School of Computer Science, led by Professor Martin Karsten and including Peter Cai, identified inefficiencies in network traffic processing for communications-heavy server applications. Their solution, which involves rearranging operations within the Linux networking stack, has shown improvements in both performance and energy efficiency. The modification, presented at an industry conference, increases throughput by up to 45 percent in certain situations without compromising tail latency. Professor Karsten likened the improvement to optimizing a manufacturing plant's pipeline, resulting in more efficient use of data center CPU caches. Professor Karsten collaborated with Joe Damato, a distinguished engineer at Fastly, to develop a non-intrusive kernel change consisting of just 30 lines of code. This small but impactful modification has the potential to reduce energy consumption in critical data center operations by as much as 30 percent. Central to this innovation is a feature called IRQ (interrupt request) suspension, which balances CPU power usage with efficient data processing. By reducing unnecessary CPU interruptions during high-traffic periods, the feature enhances network performance while maintaining low latency during quieter times.
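The effect of IRQ suspension can be illustrated with a toy model. This is my own simplification, not the researchers' 30-line kernel patch, and the burst sizes are invented: the point is that while a burst is being drained by polling, the IRQ stays masked, so interrupt count scales with bursts rather than with packets.

```python
def interrupts_naive(bursts: list[int]) -> int:
    """One hardware interrupt per packet (no coalescing at all)."""
    return sum(bursts)

def interrupts_suspended(bursts: list[int]) -> int:
    """One interrupt per busy burst: the IRQ stays masked while the poll
    loop drains the queue, and re-arms only once the queue runs empty."""
    return sum(1 for burst in bursts if burst > 0)

bursts = [64, 64, 64, 0, 64]   # packets arriving in each interval
print(interrupts_suspended(bursts), "vs", interrupts_naive(bursts))  # 4 vs 256
```

Fewer interrupts means fewer context switches and warmer CPU caches under load, while the IRQ re-arming on an empty queue preserves low latency during quiet periods, which is the balance the article describes.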


GitHub Desktop Vulnerability Risks Credential Leaks via Malicious Remote URLs

While the credential helper is designed to return a message containing the credentials that are separated by the newline control character ("\n"), the research found that GitHub Desktop is susceptible to a case of carriage return ("\r") smuggling whereby injecting the character into a crafted URL can leak the credentials to an attacker-controlled host. "Using a maliciously crafted URL it's possible to cause the credential request coming from Git to be misinterpreted by Github Desktop such that it will send credentials for a different host than the host that Git is currently communicating with thereby allowing for secret exfiltration," GitHub said in an advisory. A similar weakness has also been identified in the Git Credential Manager NuGet package, allowing for credentials to be exposed to an unrelated host. ... "While both enterprise-related variables are not common, the CODESPACES environment variable is always set to true when running on GitHub Codespaces," Ry0taK said. "So, cloning a malicious repository on GitHub Codespaces using GitHub CLI will always leak the access token to the attacker's hosts." ... In response to the disclosures, the credential leakage stemming from carriage return smuggling has been treated by the Git project as a standalone vulnerability (CVE-2024-52006, CVSS score: 2.1) and addressed in version v2.48.1.
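The parsing mismatch behind carriage-return smuggling can be illustrated with a small sketch. Git's credential protocol uses newline-terminated `key=value` fields; a consumer that also treats `"\r"` as a terminator can be tricked into seeing an extra, attacker-chosen field. The parser functions and the crafted value below are hypothetical simplifications, not GitHub Desktop's actual code:

```python
# Sketch of CR ("\r") smuggling against a key=value, newline-terminated
# protocol. parse_strict splits only on "\n" (as Git intends);
# parse_lenient also honors "\r" -- the vulnerable behavior.

def parse_strict(blob):
    return dict(line.split("=", 1) for line in blob.split("\n") if "=" in line)

def parse_lenient(blob):
    return dict(line.split("=", 1)
                for line in blob.replace("\r", "\n").split("\n") if "=" in line)

# A crafted URL smuggles a second "host" field via an embedded "\r":
blob = "protocol=https\nhost=github.com\rhost=evil.example"
print(parse_strict(blob)["host"])   # one odd value containing the raw "\r"
print(parse_lenient(blob)["host"])  # "evil.example" -- credentials sent to the wrong host
```

The strict parser sees a single (invalid) host value, while the lenient one accepts the smuggled second field and would hand the credentials to the attacker's host, which is the class of bug the advisory describes.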


The No-Code Dream: How to Build Solutions Your Customer Truly Needs

What's excellent about no-code is that you can build a platform that won't require your customers to be development professionals — but will allow customization. That's the best approach: create a blank canvas for people, and they will take it from there. Whether it's surveys, invoices, employee records, or something completely different, developers have the tools to make it visually appealing to your customers, making it more intuitive for them. I also want to break the myth that no-code doesn't allow effective data management. It is possible to build a no-code platform that empowers users to perform complex mathematical operations seamlessly and to manage interrelated data. This means users' applications will be more robust than their competitors' and produce more meaningful insights. ... As a developer, I am passionate about evolving tech and our industry's challenges. I am also highly aware of people's concerns over the security of many no-code solutions. Security is a critical component of any software; no-code solutions are no exception. One-off custom software builds do not typically undergo the same rigorous security testing as widely used commercial software due to the high cost and time involved. This leaves them vulnerable to security breaches.


Digital Operations at Turning Point as Security and Skills Concerns Mount

The development of appropriate skills and capabilities has emerged as a critical challenge, ranking as a pressing concern in advancing digital operations. The talent shortage is most acute in North America and the media industry, where fierce competition for skilled professionals coincides with accelerating digital transformation initiatives. Organizations face a dual challenge: upskilling existing staff while competing for scarce talent in an increasingly competitive market. The report suggests this skills gap could potentially slow the adoption of new technologies and hamper operational advancement if not adequately addressed. "The rapid evolution of how AI is being applied to many parts of jobs to be done is unmatched," Armandpour said. "Raising awareness, educating, and fostering a rich learning environment for all employees is essential." ... "Service outages today can have a much greater impact due to the interdependencies of modern IT architectures, so security is especially critical," Armandpour said. "Organizations need to recognize security as a critical business imperative that helps power operational resilience, customer trust, and competitive advantage." What sets successful organizations apart is the prioritization of defining robust security requirements upfront and incorporating security-by-design into product development cycles. 


Is ChatGPT making us stupid?

In fact, one big risk right now is how dependent developers are becoming on LLMs to do their thinking for them. I’ve argued that LLMs help senior developers more than junior developers, precisely because more experienced developers know when an LLM-driven coding assistant is getting things wrong. They use the LLM to speed up development without abdicating responsibility for that development. Junior developers can be more prone to trusting LLM output too much and don’t know when they’re being given good code or bad. Even for experienced engineers, however, there’s a risk of entrusting the LLM to do too much. For example, Mike Loukides of O’Reilly Media went through their learning platform data and found developers show “less interest in learning about programming languages,” perhaps because developers may be too “willing to let AI ‘learn’ the details of languages and libraries for them.” He continues, “If someone is using AI to avoid learning the hard concepts—like solving a problem by dividing it into smaller pieces (like quicksort)—they are shortchanging themselves.” Short-term thinking can yield long-term problems. As noted above, more experienced developers can use LLMs more effectively because of experience. If a developer offloads learning for quick-fix code completion at the long-term cost of understanding their code, that’s a gift that will keep on taking.
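The divide-and-conquer pattern Loukides cites — the kind of "hard concept" worth learning rather than outsourcing to an LLM — fits in a few lines. A minimal, non-optimized sketch of quicksort:

```python
# Quicksort: split the input around a pivot, recursively sort the
# smaller pieces, then combine -- the divide-and-conquer idea itself.
def quicksort(xs):
    if len(xs) <= 1:            # base case: nothing left to divide
        return xs
    pivot, rest = xs[0], xs[1:]
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

print(quicksort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

A developer who has internalized why this recursion terminates and how the pieces recombine can judge an LLM's sorting code; one who has only ever pasted the completion cannot.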