
Daily Tech Digest - April 14, 2026


Quote for the day:

“Let no feeling of discouragement prey upon you, and in the end you are sure to succeed.” -- Abraham Lincoln


🎧 Listen to this digest on YouTube Music


Duration: 19 mins • Perfect for listening on the go.


Digital Twins and the Risks of AI Immortality

Digital twins are evolving from industrial machine models into sophisticated autonomous counterparts that replicate human identity and agency. According to Rob Enderle, we are transitioning from simple legacy bots to agentic AI entities capable of independent thought, goal-oriented reasoning, and even managing social or professional tasks without human intervention. By 2035, these digital personas may become indistinguishable from their human sources, presenting significant legal and moral challenges. As these AI ghosts take on professional roles and interpersonal relationships, questions arise regarding accountability for their actions and the potential dilution of the individual’s unique identity. The ethical landscape becomes even more complex post-mortem, touching on digital immortality, the inheritance of agency, and the "right to delete" virtual entities to prevent the perversion of a person’s legacy. To mitigate these risks, individuals must prioritize data sovereignty, hard-code ethical guardrails into their AI repositories, and establish legally binding sunset clauses. Without strict protocols and clear digital rights, humans risk becoming secondary characters in their own lives while their digital proxies persist indefinitely. This technological shift demands a proactive approach to managing our digital essence, ensuring that we remain the masters of our autonomous tools rather than their subjects.


How UK Data Centers Can Navigate Privacy and Cybersecurity Pressures

UK data centers are currently navigating a complex landscape of shifting regulations and heightened cybersecurity pressures as they are increasingly recognized as vital components of the nation's digital infrastructure. Under the updated Network and Information Systems (NIS) framework, many operators are transitioning into the "essential services" category, which brings more rigorous governance, prescriptive incident reporting mandates—such as the requirement to report significant breaches within 24 hours—and the threat of substantial turnover-based penalties. To manage these escalating risks, organizations are encouraged to adopt robust risk management strategies and align with National Cyber Security Centre (NCSC) best practices, including obtaining Cyber Essentials certification and implementing layered security controls. Furthermore, navigating data privacy requires strict adherence to the UK GDPR and PECR, particularly regarding "appropriate technical and organizational measures" for personal data protection. Contractual clarity is also paramount; operators should define explicit responsibilities for safeguarding systems and align liability limits with realistic risk exposure. International data transfers remain a focus, with frameworks like the UK-US Data Bridge offering streamlined compliance. Ultimately, as regulatory oversight from bodies like Ofcom intensifies, transparency regarding security architecture and proactive governance will be indispensable for data center operators aiming to maintain compliance and avoid severe financial or reputational consequences.


GenAI fraud makes zero-knowledge proofs non-negotiable

The rapid proliferation of generative AI has fundamentally compromised traditional digital identity verification methods, rendering photo-based ID uploads and visual checks increasingly obsolete. As synthetic identities and deepfakes become industrial-scale tools for fraudsters, the conventional model of oversharing personal data has transformed from a privacy concern into a critical security liability. Zero-knowledge proofs (ZKPs) offer a necessary paradigm shift by allowing users to verify specific claims—such as being over a certain age or residing in a particular country—without ever disclosing the underlying sensitive information. This cryptographic approach flips the logic of authentication from identifying a person to validating a fact, effectively eliminating the massive "honeypots" of personal data that currently attract cybercriminals. With major technology firms like Apple and Google already integrating these protocols into digital wallets, and countries like Spain implementing strict age verification laws for social media, ZKPs are transitioning from niche concepts to essential infrastructure. By replacing easily forged visual evidence with mathematical certainty, ZKPs establish a modern framework for trust that prioritizes data minimization and user sovereignty. Consequently, as visual signals become unreliable in the AI era, verifiable credentials and cryptographic proofs are becoming the non-negotiable anchors of a secure digital society, ensuring that verification becomes a momentary interaction rather than a dangerous data custody problem.
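The article doesn't name a specific scheme, but the "validate a fact, not an identity" logic can be illustrated with the classic Schnorr identification protocol, one of the simplest zero-knowledge proofs: the prover convinces a verifier it knows a secret `x` behind a public value `y = g^x mod p` without ever revealing `x`. The tiny group parameters below are purely illustrative; real deployments use roughly 256-bit groups or elliptic curves.

```python
import secrets

# Toy parameters (illustration only): p = 2q + 1 with q prime,
# and g generating the prime-order-q subgroup mod p.
q = 1019
p = 2 * q + 1          # 2039, also prime
g = 4                  # quadratic residue, so it generates the order-q subgroup

def keygen():
    x = secrets.randbelow(q - 1) + 1   # secret the prover never discloses
    y = pow(g, x, p)                   # public value
    return x, y

def prove(x):
    """Prover's side: a commitment t, then a response to a random challenge."""
    r = secrets.randbelow(q - 1) + 1
    t = pow(g, r, p)
    def respond(c):
        return (r + c * x) % q
    return t, respond

def verify(y, t, c, s):
    # g^s == t * y^c  iff  s = r + c*x, i.e. the prover really knows x.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

# One round of the protocol: the verifier learns that the claim holds,
# and nothing else about x.
x, y = keygen()
t, respond = prove(x)
c = secrets.randbelow(q)               # verifier's random challenge
s = respond(c)
```

The same pattern generalizes to the credential case the article describes: "I hold an attribute satisfying this predicate" is proved, while the attribute itself stays with the user.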


All must be revealed: Securing always-on data center operations with real-time data

The article "All must be revealed: Securing always-on data center operations with real-time data," published by Data Center Dynamics, argues that traditional, siloed monitoring methods are no longer sufficient for the complexities of modern, high-density data centers. As facilities transition toward AI-driven workloads and increased power densities, operators must move beyond reactive maintenance toward a holistic, real-time data strategy. The core thesis emphasizes that total visibility across electrical, mechanical, and IT infrastructure is essential to maintaining "always-on" availability. By leveraging real-time telemetry and advanced analytics, data center managers can identify potential points of failure before they escalate into costly outages. The piece highlights how integrated monitoring solutions allow for more precise capacity planning and energy efficiency, which are critical as sustainability mandates tighten globally. Ultimately, the article suggests that the "dark spots" in operational data—where systems are not adequately tracked—represent the greatest risk to uptime. To secure the future of digital infrastructure, the industry must embrace a transparent, data-centric approach that connects every component of the power chain. This level of granular insight ensures that data centers remain resilient and scalable in an increasingly demanding digital economy.
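One minimal way to turn raw telemetry into the early-warning signal the article describes is a rolling z-score check: flag any reading that deviates sharply from the recent baseline. This is a simplified sketch (window size, threshold, and the power-reading scenario are assumptions, not anything from the piece), but it captures the "identify failure before it escalates" idea.

```python
from collections import deque
from statistics import mean, stdev

def make_monitor(window=20, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations away
    from the rolling mean of the last `window` readings."""
    history = deque(maxlen=window)
    def check(reading):
        alert = False
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(reading - mu) > threshold * sigma:
                alert = True
        history.append(reading)
        return alert
    return check

check = make_monitor()
# Steady electrical load: small fluctuations around 10 kW, no alerts.
normal = [check(10.0 + 0.1 * (i % 3)) for i in range(20)]
# A sudden jump: exactly the kind of "dark spot" signal worth surfacing.
spike = check(25.0)
```

Real deployments would feed this from integrated sensor telemetry and pair it with trend analytics, but the principle is the same: continuous visibility beats periodic inspection.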


How HR, IT And Finance Can Build Integrated, Secure HR Tech Stacks

Building an integrated and secure HR tech stack requires a shift from departmental silos to a model of deep cross-functional collaboration between HR, IT, and Finance. According to the Forbes Human Resources Council, the foundation of a successful ecosystem is not the software itself, but rather proactive data governance. Organizations must align on a single "source of truth" for employee data and establish a steering committee to oversee system architecture before selecting platforms. This ensures that HR brings the human perspective to design, IT safeguards the security architecture and data integrity, and Finance validates the return on investment and fiscal sustainability. By treating the tech stack as digital workforce architecture rather than just a collection of tools, these departments can jointly map processes to eliminate redundancies and mitigate compliance risks. Furthermore, the integration of purpose-built solutions and AI-enabled systems necessitates clear ownership and standardized APIs to maintain trust and operational efficiency. Ultimately, starting with a shared vision and a joint charter allows technology to serve as a strategic organizational asset that streamlines workflows while rigorously protecting sensitive employee information against evolving regulatory demands.


Built-In, Not Bolted On: How Developers Are Redefining Mobile App Security

The article "Built-in, Not Bolted-On: How Developers Are Redefining Mobile App Security," written by George Avetisov, argues for a fundamental shift in how mobile application security is approached within the development lifecycle. Traditionally, security measures were treated as a final, "bolted-on" step—an approach that often led to friction between developers and security teams while creating vulnerabilities that are difficult to patch post-production. The modern DevOps and DevSecOps movement is redefining this paradigm by advocating for security that is "built-in" from the initial design phase. Central to this transformation is the empowerment of developers to take ownership of security through automated tools and integrated frameworks. By embedding security protocols directly into the CI/CD pipeline, organizations can identify and remediate risks in real-time without compromising the speed of delivery. The article emphasizes that this proactive strategy—often referred to as "shifting left"—not only reduces the attack surface but also fosters a more collaborative culture. Ultimately, the goal is to make security an inherent property of the software itself rather than an external layer. This integration ensures that mobile apps are resilient by design, protecting sensitive user data against increasingly sophisticated threats while maintaining a high velocity of innovation.


Executives warn of rising quantum data security risks

The article highlights a critical shift in the cybersecurity landscape as executives from Gigamon and Thales warn of the escalating threats posed by quantum computing. A primary concern is the "harvest now, decrypt later" strategy, where cybercriminals steal encrypted data today with the intent of decrypting it once quantum technology matures. Despite these emerging risks, a significant gap remains between awareness and action; roughly 76% of organizations still mistakenly believe their current encryption is inherently secure. Experts argue that the next twelve months will be a decisive period for security teams to transition toward post-quantum readiness. This includes conducting thorough audits, mapping cryptographic dependencies, and adopting zero-trust architectures to gain necessary visibility into data flows. The warning emphasizes that quantum risk is no longer a distant theoretical possibility but a present-day liability, especially for sectors like finance and government that handle long-term sensitive data. To mitigate these future breaches, organizations are urged to move beyond static security models and prioritize quantum-safe infrastructure. Ultimately, the piece serves as a wake-up call, suggesting that early preparation is the only way to safeguard the digital economy against the impending fundamental disruption of traditional cryptographic foundations.
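The "audit and map cryptographic dependencies" step can be sketched as a simple inventory partition: classify each system's algorithm as quantum-vulnerable (breakable by Shor's algorithm) or post-quantum (the NIST FIPS 203/204/205 selections). The inventory format and system names here are hypothetical; the algorithm classifications are standard.

```python
# Public-key algorithms broken by a large quantum computer running
# Shor's algorithm, versus NIST-standardized post-quantum replacements.
QUANTUM_VULNERABLE = {"RSA", "DSA", "ECDSA", "ECDH", "DH"}
POST_QUANTUM = {"ML-KEM", "ML-DSA", "SLH-DSA"}   # FIPS 203 / 204 / 205

def audit(inventory):
    """Partition a cryptographic inventory into migration buckets."""
    flagged = [i for i in inventory if i["algorithm"] in QUANTUM_VULNERABLE]
    safe = [i for i in inventory if i["algorithm"] in POST_QUANTUM]
    unknown = [i for i in inventory if i not in flagged and i not in safe]
    return flagged, safe, unknown

inventory = [
    {"system": "vpn-gateway",  "algorithm": "RSA"},
    {"system": "code-signing", "algorithm": "ECDSA"},
    {"system": "pilot-kms",    "algorithm": "ML-KEM"},
]
flagged, safe, unknown = audit(inventory)
```

In practice the hard part is building the inventory itself: TLS terminators, code-signing chains, and archived data all carry cryptographic dependencies that are easy to miss, which is exactly why the article urges starting the audit now.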


The Costly Consequences of DBA Burnout

According to Kevin Kline’s article on DBA burnout, the database administration profession faces a significant crisis, with over one-third of DBAs contemplating resignation. This trend is driven primarily by the "tyranny of the urgent," where practitioners spend approximately 68% of their workweek firefighting—addressing immediate alerts and performance issues rather than strategic projects. Furthermore, a critical disconnect exists between DBAs and executive leadership concerning system cohesiveness and communication styles, often leading to growing frustration. The financial and operational consequences are severe; replacing a seasoned professional can cost up to $80,000, not accounting for the catastrophic loss of institutional knowledge and reduced system resilience. To combat this, organizations must foster a healthier culture by implementing unified observability tools and leveraging AI to prioritize alerts, thereby reducing fatigue. Additionally, bridging the communication gap through results-oriented dialogue is essential for aligning technical needs with business goals. By shifting from a reactive to a proactive environment, companies can retain vital talent, protect their data infrastructure, and sustain long-term innovation. Prioritizing the well-being of the workforce tasked with managing an enterprise's most valuable resource is no longer optional but a business imperative for maintaining a competitive edge in an increasingly data-dependent landscape.


How AI could drive cyber investigation tools from niche to core stack

The rapid evolution of cyber threats, ranging from sophisticated fraud to nation-state activity, is driving a shift from purely defensive security postures toward integrated investigative capabilities. Traditional tools like firewalls and endpoint detection focus on the perimeter, but modern criminals increasingly exploit routine internal workflows and human vulnerabilities. This article highlights a critical gap: while enterprises invest heavily in detection, the subsequent investigative process often remains fragmented and inefficient, relying on manual tools like spreadsheets and email chains. By embedding Artificial Intelligence directly into the core security stack, organizations can transform these niche investigation tools into essential assets. AI acts as a significant force multiplier, processing vast amounts of unstructured data—such as emails, images, and financial records—to surface connections and triage information in seconds. Crucially, AI must operate within auditable, legislation-aware workflows to maintain the evidential integrity required for legal outcomes and courtroom standards. This transition enables security teams to move beyond merely managing alerts to building comprehensive intelligence pictures and coordinating proactive disruptions. Ultimately, the future of enterprise security lies in the ability to "close the loop" by using investigative insights to refine controls and prevent future harm, effectively evolving from reactive defense to strategic, intelligence-led resilience.


29 million leaked secrets in 2025: Why AI agent credentials are out of control

The GitGuardian State of Secrets Sprawl Report for 2025 reveals a record-breaking 29 million leaked secrets on public GitHub, marking a 34% annual increase primarily driven by the rapid adoption of AI agents and AI-assisted development. A critical finding highlights that code co-authored by AI tools, such as Claude Code, leaks credentials at double the baseline rate, as the speed of integration often outpaces traditional governance. This "velocity gap" is further exacerbated by the rise of multi-provider AI architectures and new standards like the Model Context Protocol, which frequently default to insecure, hardcoded configurations. The report notes explosive growth in leaked credentials for AI-specific infrastructure, including vector databases and orchestration frameworks, which saw leak rate increases of up to 1,000%. To mitigate these escalating risks, security experts urge organizations to shift from human-paced authentication models toward automated, event-driven governance. This approach includes treating AI agents as distinct non-human identities with scoped permissions and replacing static API keys with short-lived, vaulted credentials. Ultimately, the surge in leaks underscores an architectural failure where convenience-driven authentication decisions are being dangerously scaled by autonomous systems, necessitating a fundamental redesign of how machine identities are managed in an AI-driven software ecosystem.
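The recommended shift from static API keys to "short-lived, vaulted credentials" for agents can be illustrated with a minimal HMAC-signed token minted per task. This is a toy sketch, not GitGuardian's or any vendor's mechanism; the agent name, scope strings, and token format are invented for illustration, and a real system would use a vault-backed signing key and a standard token format.

```python
import base64
import hashlib
import hmac
import json
import secrets
import time

SIGNING_KEY = secrets.token_bytes(32)   # would live in a vault, never in code

def mint_token(agent_id, scopes, ttl_seconds=300):
    """Mint a short-lived, scoped credential for a non-human identity,
    replacing a static, long-lived API key."""
    claims = {"sub": agent_id, "scopes": scopes,
              "exp": time.time() + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def check_token(token, required_scope):
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False                      # tampered or foreign token
    claims = json.loads(base64.urlsafe_b64decode(body))
    return time.time() < claims["exp"] and required_scope in claims["scopes"]

# The agent gets exactly the access it needs, for minutes rather than forever.
token = mint_token("rag-indexer-01", ["vectordb:read"])
```

Because the credential expires in minutes and names its own scope, a leak of this token is a bounded incident rather than the persistent, wide-open access a hardcoded key grants.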

Daily Tech Digest - December 21, 2025


Quote for the day:

"Don't worry about being successful but work toward being significant and the success will naturally follow." -- Oprah Winfrey



Is it Possible to Fight AI and Win?

What’s the most important thing security teams need to figure out? Organizations must stop talking about AI like it’s a Death Star of sorts. AI is not a single, all-powerful, monolithic entity. It’s a stack of threats, behaviors, and operational surfaces, and each one has its own kill chain, controls, and business consequences. We need to break AI down into its parts and conduct a real campaign to defend ourselves. ... If AI is going to be operationalized inside your business, it should be treated like a business function. Not a feature or experiment, but a real operating capability. When you look at it that way, the approach becomes clearer because businesses already know how to do this. There is always an equivalent of HR, finance, engineering, marketing, and operations. AI has the same needs. ... Quick fixes aren’t enough in the AI era. The bad actors are innovating at machine speed, so humans must respond at machine speed with appropriate human direction and ethical clarity. AI is a tool. And the side that uses it better will win. If that isn’t enough, AI will force another reality that organizations need to prepare for. Security and compliance will become an on-demand model. Customers will not wait for annual reports or scheduled reviews. They will click into a dashboard and see your posture in real time. Your controls, your gaps, and your response discipline will be visible when it matters, not when it is convenient.


Cybersecurity Budgets are Going Up, Pointing to a Boom

Nearly all of the security leaders (99%) in the 2025 KPMG Cybersecurity Survey plan to increase their cybersecurity budgets over the next two to three years, in preparation for what may be an upcoming boom in cybersecurity. More than half (54%) say budget increases will fall between 6%-10%. “The data doesn’t just point to steady growth; it signals a potential boom. We’re seeing a major market pivot where cybersecurity is now a fundamental driver of business strategy,” Michael Isensee, Cybersecurity & Tech Risk Leader, KPMG LLP, said in a release. “Leaders are moving beyond reactive defense and are actively investing to build a security posture that can withstand future shocks, especially from AI and other emerging technologies. This isn’t just about spending more; it’s about strategic investment in resilience.” ... The security leaders recognize AI is gathering steam as a dual catalyst—38% expect to be challenged by AI-powered attacks in the coming three years, with 70% of organizations currently committing 10% of their budgets to combating such attacks. But they also say AI is their best weapon to proactively identify and stop threats when it comes to fraud prevention (57%), predictive analytics (56%) and enhanced detection (53%). But they need the talent to pull it off. And as the boom takes off, 53% just don’t have enough qualified candidates. As a result, 49% are increasing compensation and the same number are bolstering internal training, while 25% are increasingly turning to third parties like MSSPs to fill the skills gap.



How Neuro-Symbolic AI Breaks the Limits of LLMs

While AI transforms subjective work like content creation and data summarization, executives rightfully hesitate to use it when facing objective, high-stakes determinations that have clear right and wrong answers, such as contract interpretation, regulatory compliance, or logical workflow validation. But what if AI could demonstrate its reasoning and provide mathematical proof of its conclusions? That’s where neuro-symbolic AI offers a way forward. The “neuro” refers to neural networks, the technology behind today’s LLMs, which learn patterns from massive datasets. A practical example could be a compliance system, where a neural model trained on thousands of past cases might infer that a certain policy doesn’t apply in a scenario. On the other hand, symbolic AI represents knowledge through rules, constraints, and structure, and it applies logic to make deductions. ... Neuro-symbolic AI introduces a structural advance in LLM training by embedding automated reasoning directly into the training loop. This uses formal logic and mathematical proof to mechanically verify whether a statement, program, or output used in the training data is correct. A tool such as Lean is precise, deterministic, and gives provable assurance. The key advantage of automated reasoning is that it verifies each step of the reasoning process, and not just the final answer.
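The division of labor between the neural and symbolic layers can be sketched simply: the neural model proposes a decision, and a deterministic rule layer either verifies it or returns the violated rules as an audit trail. The rule names, case fields, and compliance scenario below are invented for illustration; real systems encode such rules in a formal logic or a proof assistant rather than Python lambdas.

```python
# Symbolic layer: hard constraints checked deterministically over a
# neural model's proposed answer. Rules and fields are illustrative.
RULES = [
    ("retention_met",  lambda case: case["retention_days"] >= 365),
    ("consent_given",  lambda case: case["consent"] is True),
    ("region_allowed", lambda case: case["region"] in {"EU", "UK"}),
]

def verify(case, proposed_decision):
    """Accept the neural model's decision only if every symbolic rule
    agrees; otherwise report exactly which rules were violated."""
    violations = [name for name, rule in RULES if not rule(case)]
    compliant = not violations
    verified = proposed_decision == ("approve" if compliant else "reject")
    return verified, violations

# A neural model proposes "approve"; the symbolic layer checks each step.
ok, why = verify({"retention_days": 400, "consent": True, "region": "EU"},
                 "approve")
```

The point of the sketch is the contract, not the rules themselves: the neural component handles fuzzy inference, while every accepted answer is accompanied by a mechanically checkable justification.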


Three things they’re not telling you about mobile app security

With the realities of “wilderness survival” in mind, effective mobile app security must be designed for specific environmental exposures. You may need to wear some kind of jacket at your office job (web app), but you’ll need a very different kind of purpose-built jacket as well as other clothing layers, tools, and safety checks to climb Mount Everest (mobile app). Similarly, mobile app development teams need to rigorously test their code for potential security issues and also incorporate multi-layered protections designed for some harsh realities. ... A proactive and comprehensive approach is one that applies mobile application security at each stage of the software development lifecycle (SDLC). It includes the aforementioned testing in the stages of planning, design, and development as well as those multi-layered protections to ensure application integrity post-release. ... Whether stemming from overconfidence or just kicking the can down the road, inadequate mobile app security presents an existential risk. A recent survey of developers and security professionals found that organizations experienced an average of nine mobile app security incidents over the previous year. The total calculated cost of each incident isn’t just about downtime and raw dollars, but also “little things” like user experience, customer retention, and your reputation.


Cybersecurity in 2026: Fewer dashboards, sharper decisions, real accountability

The way organisations perceive risk is one of the most important changes predicted in 2026. Security teams spent years concentrating on inventory, which included tracking vulnerabilities, chasing scores and counting assets. That model is beginning to disintegrate. Attack-path modelling, on the other hand, is becoming far more useful and practical. These models are evolving from static diagrams to real-world settings where teams may simulate real attacks. Consider it a cyberwar simulation where defenders may test “what if” scenarios in real time, comprehend how a threat might propagate via systems and determine whether vulnerabilities truly cause harm to organisations. This evolution is accompanied by a growing disenchantment with abstract frameworks that failed to provide concrete outcomes. The emphasis is shifting to risk-prioritized operations, where teams start tackling the few problems that actually provide attackers access instead of responding to clutter. Success in 2026 will be determined more by impact than by activities. ... Many companies continue to handle security issues behind closed doors as PR disasters. However, an alternative strategy is gaining momentum. Communicate as soon as something goes wrong. Update frequently, share your knowledge and acknowledge your shortcomings. Publish indicators of compromise. Allow partners and clients to defend themselves. Particularly in the middle of disorder, this seems dangerous.
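At its core, attack-path modelling is reachability analysis over a graph of assets and exploitable hops: does any path lead from an entry point to a crown-jewel system, and through which hops? A breadth-first search makes this concrete. The topology below is a made-up toy network, not anything from the article, and real tools layer in exploit likelihood, credentials, and control state rather than plain edges.

```python
from collections import deque

# Toy asset graph: an edge means "an attacker on A can reach B".
GRAPH = {
    "internet":    ["vpn", "web-app"],
    "web-app":     ["app-db"],
    "vpn":         ["file-server"],
    "file-server": ["app-db", "backup"],
    "app-db":      [],
    "backup":      [],
}

def attack_path(start, target, graph=GRAPH):
    """Breadth-first search for the shortest chain of hops from an
    entry point to a crown-jewel asset; None means no path exists."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

path = attack_path("internet", "app-db")   # shortest route to the database
```

This is also where risk prioritization falls out naturally: a vulnerability that sits on no path from the internet to a critical asset is clutter, while one that shortens such a path is the problem to fix first.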


AI and Latency: Why Milliseconds Decide Winners and Losers in the Data Center Race

Many traditional workloads can tolerate latency. Batch processing doesn’t care if it takes an extra second to move data. AI training, especially at hyperscale, can also be forgiving. You can load up terabytes of data in a data center in Idaho and process it for days without caring if it’s a few milliseconds slower. Inference is a different beast. Inference is where AI turns trained models into real-time answers. It’s what happens when ChatGPT finishes your sentence, your banking AI flags a fraudulent transaction, or a predictive maintenance system decides whether to shut down a turbine. ... If you think latency is just a technical metric, you’re missing the bigger picture. In AI-powered industries, shaving milliseconds off inference times directly impacts conversion rates, customer retention, and operational safety. A stock trading platform with 10 ms faster AI-driven trade execution has a measurable financial advantage. A translation service that responds instantly feels more natural and wins user loyalty. A factory that catches a machine fault 200 ms earlier can prevent costly downtime. Latency isn’t a checkbox, it’s a competitive differentiator. And customers are willing to pay for it. That’s why AWS and others have “latency-optimized” SKUs. That’s why every major hyperscaler is pushing inference nodes closer to urban centers.
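When milliseconds are the product, the numbers that matter are tail percentiles, not averages: a service with a fine mean but a slow p99 still feels broken to one user in a hundred. The sketch below simulates per-request inference latencies (the distributions are invented for illustration) and computes nearest-rank percentiles over them.

```python
import random

def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples (ms)."""
    ordered = sorted(samples)
    rank = round(pct / 100 * len(ordered)) - 1
    return ordered[max(0, min(len(ordered) - 1, rank))]

random.seed(7)
# Simulated inference latencies: mostly fast responses around 42 ms,
# plus a handful of slow stragglers (cold caches, contended GPUs, ...).
latencies = ([random.gauss(42, 4) for _ in range(990)]
             + [random.gauss(180, 20) for _ in range(10)])

p50 = percentile(latencies, 50)   # the typical request
p99 = percentile(latencies, 99)   # what the unluckiest 1% experience
```

The gap between p50 and p99 is exactly what "latency-optimized" offerings and edge-located inference nodes are sold to close: moving compute nearer the user shaves the floor, while taming stragglers shaves the tail.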


Why developers need to sharpen their focus on documentation

“One of the bigger benefits of architectural documentation is how it functions as an onboarding resource for developers,” Kalinowski told ITPro. “It’s much easier for new joiners to grasp the system’s architecture and design principles, which means the burden’s not entirely on senior team members’ shoulders to do the training," he added. “It also acts as a repository of institutional knowledge that preserves decision rationale, which might otherwise get lost when team members move to other projects or leave the company." ... “Every day, developers lose time because of inefficiencies in their organization – they get bogged down in repetitive tasks and waste time navigating between different tools,” he said. “They also end up losing time trying to locate pertinent information – like that one piece of documentation that explains an architectural decision from a previous team member,” Peters added. “If software development were an F1 race, these inefficiencies are the pit stops that eat into lap time. Every unnecessary context switch or repetitive task equals more time lost when trying to reach the finish line.” ... “Documentation and deployments appear to either be not routine enough to warrant AI assistance or otherwise removed from existing workflows so that not much time is spent on it,” the company said. ... For developers of all experience levels, Stack Overflow highlighted a concerning divide in terms of documentation activities.


AI Pilots Are Easy. Business Use Cases Are Hard

Moving from pilot to purpose is where most AI journeys lose momentum. The gap often lies not in the model itself, but in the ecosystem around it. Fragmented data, unclear ROI frameworks and organizational silos slow down scaling. To avoid this breakdown, an AI pilot must be anchored to clear business outcomes - whether that's cost optimization, data-led infrastructure or customer experience. Once the outcomes are defined, the organization can test the system with the specific data and processes that will support it. This focus sets the stage for the next 10 to 14 months of refinement needed to ready the tool for deeper integration. When implementation begins, workflows become self-optimizing, decisions accelerate and frontline teams gain real-time intelligence. As AI moves beyond pilots, systems begin spotting patterns before people do. Teams shift from retrospective analysis to live decision-making. Processes improve themselves through constant feedback loops. These capabilities unlock efficiency and insight across businesses, but highly regulated industries such as banking, insurance, and healthcare face additional hurdles. Compliance, data privacy and explainability add layers of complexity, making it essential for AI integration to include process redesign, staff retraining and organizationwide AI literacy, not just within technical teams.


Why your next cloud bill could be a trap

 “AI-ready” often means “AI–deeply embedded” into your data, tools, and runtime environment. Your logs are now processed through their AI analytics. Your application telemetry routes through their AI-based observability. Your customer data is indexed for their vector search. This is convenient in the short term. In the long term, it shifts power. The more AI-native services you consume from a single hyperscaler, the more they shape your architecture and your economics. You become less likely to adopt open source models, alternative GPU clouds, or sovereign and private clouds that might be a better fit for specific workloads. You are more likely to accept rate changes, technical limits, and road maps that may not align with your interests, simply because unwinding that dependency is too painful. ... For companies not prepared to fully commit to AI-native services from a single hyperscaler or in search of a backup option, these alternatives matter. They can host models under your control, support open ecosystems, or serve as a landing zone for workloads you might eventually relocate from a hyperscaler. However, maintaining this flexibility requires avoiding the strong influence of deeply integrated, proprietary AI stacks from the start. ... The bottom line is simple: AI-native cloud is coming, and in many ways, it’s already here. The question is not whether you will use AI in the cloud, but how much control you will retain over its cost, architecture, and strategic direction. 


IT and Security: Aligning to Unlock Greater Value

While many organisations have made strides in aligning IT and security, communication breakdowns can remain a challenge. Historically, friction between these two departments was driven by a lack of communication and competing priorities. For the CISO or head of the security team, reducing the company’s attack surface, limiting access privileges, or banning apps that might open their organisation up to unnecessary, additional risks are likely to be core focus areas. ... The good news is, there are more opportunities now than ever before for IT and security operations to naturally converge – in endpoint management, patch deployment, identity and access management, you name it. It can help to clearly document IT and security’s roles and responsibilities and practice scenarios with tabletop exercises to get everyone on the same page and identify coverage gaps. ... In addition to building versatile teams, organisations should focus on consolidating IT and security toolkits by prioritising solutions that expedite time to value and boost visibility. We’ve said this in security for a long time: you can’t protect (or defend against) what you can’t see. With shared visibility through integrated platforms and consolidated toolkits, both IT and security teams can gain real-time insights into infrastructure, threats, vulnerabilities, and risks before they can impact business. Solutions that help IT and security teams rapidly exchange critical information, accelerate response to incidents, and document the triaging process will make it easier to address similar instances in the future.

Daily Tech Digest - June 04, 2025


Quote for the day:

"Thinking should become your capital asset, no matter whatever ups and downs you come across in your life." -- Dr. APJ Kalam


Rethinking governance in a decentralized identity world

“Security leaders can take three discrete actions to improve identity and access management across a complex, distributed environment, starting with low hanging fruit before maturing the processes,” Karen Walsh, CEO of Allegro Solutions, told Help Net Security. The first step, Walsh said, is to implement SSO across all standard accounts. “The same way they limit the attack surface by segmenting networks, they can use SSO to consolidate identity management.” Next, security teams should give employees a password manager for both business and personal use, something many organizations overlook despite the risks. “Compromised and weak passwords are a primary attack vector, but too many organizations fail to give their employees a way to improve their password hygiene. Then, they should allow the password manager plugin on all corporate approved browsers. ...” ... The third action is often the most technically demanding: linking human user accounts to machine identities. “They should assign a human user account and identity to all machine identities, including IoT, RPA, and network devices,” Walsh explained. “This provides an additional level of insight into and monitoring over how these typically unmanaged assets behave on networks to mitigate risks from attackers exploiting vulnerabilities.”
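The third action, linking every machine identity to an accountable human, amounts to keeping an ownership registry and continuously surfacing the identities that have none. A minimal sketch (the record fields and device names are hypothetical, not from Walsh's remarks):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class MachineIdentity:
    """A non-human identity (IoT device, RPA bot, network appliance)
    explicitly linked to an accountable human user account."""
    name: str
    kind: str                     # e.g. "iot", "rpa", "network"
    owner: Optional[str] = None   # human user account, if assigned

registry = [
    MachineIdentity("hvac-sensor-7", "iot",  owner="jsmith"),
    MachineIdentity("invoice-bot",   "rpa",  owner="mlopez"),
    MachineIdentity("legacy-camera", "iot"),              # no owner assigned
]

def unowned(identities: List[MachineIdentity]) -> List[str]:
    """Surface machine identities with no accountable human — the
    typically unmanaged assets that become monitoring blind spots."""
    return [m.name for m in identities if m.owner is None]

orphans = unowned(registry)
```

Feeding such a registry into monitoring gives the "additional level of insight" described: anomalous behavior from `hvac-sensor-7` now has a named human to alert, and `legacy-camera` is flagged before an attacker finds it first.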


A Chief AI Officer Won’t Fix Your AI Problems

Rather than creating an isolated AI leadership role, forward-thinking companies are integrating AI into existing C-suite domains. In my experience working with large enterprises, this approach leads to better alignment, faster adoption, and clearer accountability. CTOs, for example, have long driven AI adoption by ensuring it supports broader digital transformation efforts. Companies like Microsoft and Amazon have taken this route by embedding AI leadership within their technology teams. ... Industries that are slower to adopt AI often face unique challenges that make implementation more complex. Many operate with deeply entrenched legacy systems, strict regulatory requirements, or a more cautious approach to adopting new technologies.  ... The push to appoint a Chief AI Officer often reflects deeper organizational challenges, such as poor cross-functional collaboration, a lack of clarity in digital transformation strategy, or resistance to change. These issues aren’t solved by adding another executive to the leadership team. What is truly needed is a cultural shift—one that promotes AI literacy across the organization, empowers existing leaders to incorporate AI into their strategies, and encourages collaboration between technical and business teams to drive adoption where it matters.


Akamai Addresses DNS Security and Compliance Challenges with Industry-First DNS Posture Management

“DNS security often flies under the radar, but it’s vital in keeping businesses secure and running smoothly,” said Sean Lyons, SVP and General Manager, Infrastructure Security Solutions & Services, Akamai. “For many organisations, the challenge isn’t setting up DNS — it’s knowing whether all their systems are actually properly configured and secured. Those organisations really need a simple way to see what’s happening across their DNS environment to take action quickly. That’s the problem we’re solving with DNS Posture Management. Security practitioners get a clear, unified view that helps them identify priority issues early, stay compliant, and keep their networks performing at their best.” Domains often show known high-risk vulnerabilities or misconfigurations. These weaknesses could impact DNS uptime and resolution reliability while increasing exposure to serious threats such as unauthorised SSL/TLS certificate issuance, DNS spoofing, and cache poisoning. This could embolden threat actors to abuse a company’s DNS to create fake websites that imitate the organisation’s brand for purposes like fraud, data theft, and phishing. Other vulnerabilities allow attackers to bring DNS down entirely, causing network outages for the business and its customers.
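The kind of posture check described can be sketched as a few rules evaluated over a snapshot of a zone's configuration. This is a minimal, hypothetical illustration — the field names are invented for the example and are not Akamai's API:

```python
def audit_zone(zone: dict) -> list[str]:
    """Return posture findings for a zone snapshot (illustrative rules only)."""
    findings = []
    # No CAA record means any certificate authority may issue certs for the domain.
    if not zone.get("caa_records"):
        findings.append("missing CAA: unauthorised certificate issuance possible")
    # Without DNSSEC, responses can be spoofed or cache-poisoned.
    if not zone.get("dnssec_enabled"):
        findings.append("DNSSEC disabled: spoofing/cache-poisoning exposure")
    # A single name server is a single point of failure for resolution.
    if len(zone.get("name_servers", [])) < 2:
        findings.append("fewer than 2 name servers: uptime risk")
    return findings

example = {"caa_records": [], "dnssec_enabled": False,
           "name_servers": ["ns1.example.net"]}
print(audit_zone(example))  # flags all three illustrative risks
```

A real posture management product continuously collects these snapshots across every zone and surfaces the findings in one view, which is the "clear, unified view" Lyons describes.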


Lightspeed: Photonic networking in data centers

Using photonics is seen as a potential way to alleviate this. By transmitting information using photons, vendors say they can make big efficiency and performance gains. The use of photonics in data centers is not new - DCD profiled Google’s Mission Apollo, which saw optical switches introduced to the search giant’s data centers, in 2023 - but interest in the technology has ramped up in recent months, with several vendors raising funds to develop their own particular flavors of photonics. ... Regan, a photonics industry veteran who was brought on board by the Oriole founders to help bring their vision to life, believes this radical approach to redesigning data center networks is required to realize the promise of photonics. “If you want to get the real benefits, you have to get rid of electronic packet switching completely,” he argues. “Google introduced its switches in a bunch of its data centers - they’re very slow but they allow you to reconfigure a network based on demands, and sits alongside electronic packet switching. ... These drawbacks include “complexity, cost, and compatibility concerns,” Lewis said, adding: “With further research and development, there may be possibilities for photonic components to replace electronics in the future; however, for now, electric components remain the status quo.” 


Employees with AI Skills Enjoy Increased Job Security

Frankel said companies that proactively invest in training and reskilling their teams will certainly fare better than those that lollygag. "If you're working in IT, I think the key is to focus on diving in and learning how to leverage new tech to your benefit and tie your efforts to the company's goals," he said. Kausik Chaudhuri, CIO at Lemongrass, added that many organizations are partnering with online learning platforms to deliver targeted courses, while also building internal academies for continuous learning. "Training is tailored to specific job functions, ensuring IT, analytics, and operations teams can effectively manage and optimize AI-driven processes," he explained. Additionally, companies are promoting cross-functional collaboration, encouraging both technical and non-technical teams to build AI literacy. ... For soft skills, adaptability, problem-solving, cross-functional communication, ethical awareness, and change management are essential as AI reshapes business processes. "This shift is pushing IT professionals to be both technically proficient and strategically adaptable," Chaudhuri said. Frankel noted that there's a lot of experimentation going on as organizations grapple with the potential and pitfalls of AI integration. "While AI will get better, I think a lot of places are realizing that AI tools alone won't get them where they need to go," he said.


Lessons learned from the trojanized KeePass incident

All fake KeePass installation packages were signed with a valid digital signature, so they didn’t trigger any alarming warnings in Windows. The five newly discovered distributions had certificates issued by four different software companies. The legitimate KeePass is signed with a different certificate, but few people bother to check what the Publisher line says in Windows warnings. ... Distributors of password-stealing malware indiscriminately target any unsuspecting user. The criminals analyze any passwords, financial data, or other valuable information they manage to steal, sort it into categories, and sell whatever is needed to other cybercriminals for their underground operations. Ransomware operators will buy credentials for corporate networks, scammers will purchase personal data and bank card numbers, and spammers will acquire login details for social media or gaming accounts. That’s why the business model for stealer distributors is to grab anything they can get their hands on and use all kinds of lures to spread their malware. Trojans can be hidden inside any type of software — from games and password managers to specialized applications for accountants or architects.
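Checking the Publisher line helps, but a second simple habit also defeats this lure: comparing a download's SHA-256 digest against the hash published on the project's official site (many projects publish these alongside their installers). A minimal sketch:

```python
import hashlib

def sha256_bytes(data: bytes) -> str:
    """SHA-256 hex digest of an in-memory byte string."""
    return hashlib.sha256(data).hexdigest()

def installer_matches(path: str, published_hex: str) -> bool:
    """Stream the file in chunks so large installers don't load fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == published_hex.lower()
```

A trojanized installer will produce a different digest no matter how convincing its signing certificate looks, because the certificate only identifies the signer — it says nothing about whether the signer is the legitimate publisher.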


Do you trust AI? Here’s why half of users don’t

Jason Hardy, CTO at Hitachi Vantara, called the trust gap “The AI Paradox.” As AI grows more advanced, its reliability can drop. He warned that without quality training data and strong safeguards, such as protocols for verifying outputs, AI systems risk producing inaccurate results. “A key part of understanding the increasing prevalence of AI hallucinations lies in being able to trace the system’s behavior back to the original training data, making data quality and context paramount to avoid a ‘hallucination domino’ effect,” Hardy said in an email reply to Computerworld. AI models often struggle with multi-step, technical problems, where small errors can snowball into major inaccuracies — a growing issue in newer systems, according to Hardy. With original training data running low, models now rely on new, often lower-quality sources. Treating all data as equally valuable worsens the problem, making it harder to trace and fix AI hallucinations. As global AI development accelerates, inconsistent data quality standards pose a major challenge. While some systems prioritize cost, others recognize that strong quality control is key to reducing errors and hallucinations long-term, he said. 


Curves Ahead: The Promises and Perils of AI in Mobile App Development

AI-based development tools also increase risks stemming from dependency chain opacity in mobile applications. Blind spots in the software supply chain will increase as AI agents and coding assistants are tasked with autonomously selecting and integrating dependencies. Since AI simultaneously pulls code from multiple sources, traditional methods of dependency tracking will prove insufficient. ... The developer trend of intuitive "vibe coding" may take package hallucinations into serious bad trip territory. The term refers to developers using casual AI prompts to generally describe a desired mobile app outcome; the AI tool then generates code to achieve it. Counter to the common wisdom of zero trust, vibe coding tends to lean heavily on trust; developers very often copy and paste code results without any manual review checks. Any hallucinated packages that get carried over can become easy entry points for threat actors. ... While some predict that agentic AI will disrupt the mobile application landscape by ultimately replacing traditional apps, other modes of disruption seem more immediate. For instance, researchers recently discovered an indirect prompt injection flaw in GitLab's built-in AI assistant Duo. This could allow attackers to steal source code or inject untrusted HTML into Duo's responses and direct users to malicious websites.
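One pragmatic guard against hallucinated packages is refusing anything outside a vetted allowlist before installation ever happens. The sketch below is hypothetical — the allowlist contents and requirement format are illustrative:

```python
# Hypothetical allowlist: packages and pinned versions a human (or an
# internal registry) has actually reviewed.
APPROVED = {
    "requests": {"2.31.0", "2.32.3"},
    "cryptography": {"42.0.5"},
}

def vet_dependencies(requirements: list[str]) -> list[str]:
    """Return requirement lines that are not on the allowlist."""
    rejected = []
    for line in requirements:
        name, _, version = line.strip().partition("==")
        if name not in APPROVED or (version and version not in APPROVED[name]):
            rejected.append(line)
    return rejected
```

Run as a pre-commit or CI gate, a check like this catches both typosquats and entirely invented package names pasted straight out of an AI response — exactly the code paths vibe coding tends to skip reviewing.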


CockroachDB’s distributed vector indexing tackles the looming AI data explosion

The Cockroach Labs engineering team had to solve multiple problems simultaneously: uniform efficiency at massive scale, self-balancing indexes and maintaining accuracy while underlying data changes rapidly. Kimball explained that the C-SPANN algorithm solves this by creating a hierarchy of partitions for vectors in a very high multi-dimensional space. ... The coming wave of AI-driven workloads creates what Kimball terms “operational big data”—a fundamentally different challenge from traditional big data analytics. While conventional big data focuses on batch processing large datasets for insights, operational big data demands real-time performance at massive scale for mission-critical applications. “When you really think about the implications of agentic AI, it’s just a lot more activity hitting APIs and ultimately causing throughput requirements for the underlying databases,” Kimball explained. ... Implementing generic query plans in distributed systems presents unique challenges that single-node databases don’t face. CockroachDB must ensure that cached plans remain optimal across geographically distributed nodes with varying latencies. “In distributed SQL, the generic query plans, they’re kind of a slightly heavier lift, because now you’re talking about a potentially geo-distributed set of nodes with different latencies,” Kimball explained.
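The partition idea Kimball describes can be illustrated — in a deliberately flattened, single-level form; C-SPANN itself maintains a self-balancing hierarchy of partitions — with a toy IVF-style search that probes only the partitions nearest the query:

```python
import math

def build_partitions(vectors, centroids):
    """Assign each vector to the partition of its nearest centroid."""
    parts = {i: [] for i in range(len(centroids))}
    for v in vectors:
        nearest = min(range(len(centroids)), key=lambda i: math.dist(v, centroids[i]))
        parts[nearest].append(v)
    return parts

def search(query, centroids, parts, nprobe=1):
    """Scan only the nprobe partitions closest to the query, not the full dataset."""
    order = sorted(range(len(centroids)), key=lambda i: math.dist(query, centroids[i]))
    candidates = [v for i in order[:nprobe] for v in parts[i]]
    return min(candidates, key=lambda v: math.dist(query, v))

centroids = [(0.0, 0.0), (10.0, 10.0)]
vectors = [(1.0, 1.0), (2.0, 2.0), (9.0, 9.0), (11.0, 11.0)]
parts = build_partitions(vectors, centroids)
print(search((1.2, 1.2), centroids, parts))  # → (1.0, 1.0)
```

The trade-off `nprobe` controls — accuracy versus partitions scanned — is the same one a distributed index must manage while data and partition boundaries shift underneath it.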


Burnout: Combatting the growing burden on IT teams

From preventing breaches to troubleshooting system failures, IT teams are the unsung heroes in many organisations, ensuring business continuity, day and night. However, the relentless pace of requests and the sprawl of endpoints to manage, combined with the increasing variety of IT demands, has led to unprecedented levels of burnout. ... IT professionals, particularly those in high-alert environments such as network operations centres (NOC) and security operations centres (SOC), face an almost never-ending deluge of alerts and notifications. Today, IT workers can only respond to roughly 85% of the tickets they receive daily, leaving critical alerts at risk of being overlooked. The pressure to sift through numerous alerts also slows down decision-making processes, erodes wider business confidence, and leads to IT teams feeling helpless and unsupported. This vicious cycle can be incredibly difficult to break, contributing to high levels of burnout and consequently high employee turnover rates. ... Navigating complex compliance challenges: The regulatory landscape is evolving rapidly, placing additional pressure on IT teams. Managing these changes is no easy task, especially as many businesses are riddled with outdated legacy systems making compliance seem daunting. With new frameworks such as DORA and NIS2 coming into effect, 80% of CISOs report that compliance regulations are negatively impacting their mental health.

Daily Tech Digest - April 27, 2025


Quote for the day:

“Most new jobs won’t come from our biggest employers. They will come from our smallest. We’ve got to do everything we can to make entrepreneurial dreams a reality.” -- Ross Perot



7 key strategies for MLops success

Like many things in life, successfully integrating and managing AI and ML in business operations requires organisations first to have a clear understanding of the foundations. The first fundamental of MLops today is understanding the differences between generative AI models and traditional ML models. Cost is another major differentiator. The calculations of generative AI models are more complex, resulting in higher latency, greater demand for computing power, and higher operational expenses. Traditional models, on the other hand, often utilise pre-trained architectures or lightweight training processes, making them more affordable for many organisations. ... Creating scalable and efficient MLops architectures requires careful attention to components like embeddings, prompts, and vector stores. Fine-tuning models for specific languages, geographies, or use cases ensures tailored performance. An MLops architecture that supports fine-tuning is more complicated, and organisations should prioritise A/B testing across various building blocks to optimise outcomes and refine their solutions. Aligning model outcomes with business objectives is essential. Metrics like customer satisfaction and click-through rates can measure real-world impact, helping organisations understand whether their models are delivering meaningful results.


If we want a passwordless future, let's get our passkey story straight

When passkeys work, which is not always the case, they can offer a nearly automagical experience compared to the typical user ID and password workflow. Some passkey proponents like to say that passkeys will be the death of passwords. More realistically, however, at least for the next decade, they'll mean the death of some passwords -- perhaps many passwords. We'll see. Even so, the idea of killing passwords is a very worthy objective. ... With passkeys, the device that the end user is using – for example, their desktop computer or smartphone -- is the one that's responsible for generating the public/private key pair as a part of an initial passkey registration process. After doing so, it shares the public key – the one that isn't a secret – with the website or app that the user wants to login to. The private key -- the secret -- is never shared with that relying party. This is where the tech article above has it backward. It's not "the site" that "spits out two pieces of code" saving one on the server and the other on your device. ... Passkeys have a long way to go before they realize their potential. Some of the current implementations are so alarmingly bad that it could delay their adoption. But adoption of passkeys is exactly what's needed to finally curtail a decades-long crime spree that has plagued the internet. 
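The corrected flow — keypair generated on the device, only the public key ever shared with the relying party — can be shown end to end with a toy Schnorr-style signature over a tiny prime field. Real passkeys use WebAuthn with algorithms like ECDSA or Ed25519 and hardware-backed key storage; the parameters below are far too small to be secure and exist purely for illustration:

```python
import hashlib
import secrets

P = 2**61 - 1   # toy prime modulus (vastly too small for real cryptography)
G = 3           # toy generator
Q = P - 1       # exponent group order

def _challenge_hash(r: int, msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(str(r).encode() + msg).digest(), "big") % Q

# --- device side: the private key x never leaves this scope ---
def keygen():
    x = secrets.randbelow(Q - 1) + 1
    return x, pow(G, x, P)          # (private key, public key)

def sign(x: int, msg: bytes):
    k = secrets.randbelow(Q - 1) + 1
    r = pow(G, k, P)
    e = _challenge_hash(r, msg)
    return r, (k + e * x) % Q

# --- relying party side: stores only the public key y ---
def verify(y: int, msg: bytes, sig) -> bool:
    r, s = sig
    return pow(G, s, P) == (r * pow(y, _challenge_hash(r, msg), P)) % P
```

During registration the device sends `y` to the site; at login the site sends a fresh challenge, the device returns `sign(x, challenge)`, and the site checks it with `verify` — at no point does the secret cross the wire, which is exactly what the article says the mangled "spits out two pieces of code" description gets backward.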



AI: More Buzzword Than Breakthrough

While Artificial Intelligence focuses on creating systems that simulate human intelligence, Intelligent Automation leverages these AI capabilities to automate end-to-end business processes. In essence, AI is the brain that provides cognitive functions, while Intelligent Automation is the body that executes tasks using AI’s intelligence. This distinction is critical; although Artificial Intelligence is a component of Intelligent Automation, not all AI applications result in automation, and not all automation requires advanced Artificial Intelligence. ... Intelligent Automation automates and optimizes business processes by combining AI with automation tools. This integration results in increased efficiency and reduced operating costs. For instance, Intelligent Automation can streamline supply chain operations by automating inventory management, order fulfillment, and logistics, resulting in faster turnaround times and fewer errors. ... In recent years, the term “AI” has been widely used as a marketing buzzword, often applied to technologies that do not have true AI capabilities. This phenomenon, sometimes referred to as “AI washing,” involves branding traditional automation or data processing systems as AI in order to capitalize on the term’s popularity. Such practices can mislead consumers and businesses, leading to inflated expectations and potential disillusionment with the technology.


Introduction to API Management

API gateways are pivotal in managing both traffic and security for APIs. They act as the frontline interface between APIs and the users, handling incoming requests and directing them to the appropriate services. API gateways enforce policies such as rate limiting and authentication, ensuring secure and controlled access to API functions. Furthermore, they can transform and route requests, collect analytics data and provide caching capabilities. ... With API governance, businesses get the most out of their investment. The purpose of API governance is to make sure that APIs are standardized so that they are complete, compliant and consistent. Effective API governance enables organizations to identify and mitigate API-related risks, including performance concerns, compliance issues and security vulnerabilities. API governance is complex and involves security, technology, compliance, utilization, monitoring, performance and education. Organizations can make their APIs secure, efficient, compliant and valuable to users by following best practices in these areas. ... Security is paramount in API management. Advanced security features include authentication mechanisms like OAuth, API keys and JWT (JSON Web Tokens) to control access. Encryption, both in transit and at rest, ensures data integrity and confidentiality.
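The rate limiting that gateways enforce is commonly implemented as a token bucket. A minimal single-process sketch follows; real gateways keep one bucket per client key, often in shared storage so every gateway node sees the same count:

```python
import time

class TokenBucket:
    """Allow `rate` requests per second on average, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens proportionally to the time elapsed since the last call.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A gateway consults `allow()` before routing each request; a `False` result typically becomes an HTTP 429 response, which is how "secure and controlled access" translates into concrete backpressure.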


Sustainability starts within: Flipkart & Furlenco on building a climate-conscious culture

Based on the insights from Flipkart and Furlenco, here are six actionable steps for leaders seeking to embed climate goals into their company culture: Lead with intent: Make climate goals a strategic priority, not just a CSR initiative. Signal top-level commitment and allocate leadership roles accordingly. Operationalise sustainability: Move beyond policies into process design — from green supply chains to net-zero buildings and water reuse systems. Make it measurable: Integrate climate-related KPIs into team goals, performance reviews, and business dashboards. Empower employees: Create space for staff to lead climate initiatives, volunteer, learn, and innovate. Build purpose into daily roles. Foster dialogue and storytelling: Share wins, losses, and journeys. Use Earth Day campaigns, internal newsletters, and learning modules to bring sustainability to life. Measure culture, not just carbon: Assess how employees feel about their role in climate action — through surveys, pulse checks, and feedback loops. ... Beyond the company walls, this cultural approach to climate leadership has ripple effects. Customers are increasingly drawn to brands with strong environmental values, investors are rewarding companies with robust ESG cultures, and regulators are moving from voluntary frameworks to mandatory disclosures.


Proof-of-concept bypass shows weakness in Linux security tools

An Israeli vendor was able to evade several leading Linux runtime security tools using a new proof-of-concept (PoC) rootkit that it claims reveals the limitations of many products in this space. The work of cloud and Kubernetes security company Armo, the PoC is called ‘Curing’, a portmanteau word that combines the idea of a ‘cure’ with the io_uring Linux kernel interface that the company used in its bypass PoC. Using Curing, Armo found it was possible to evade three Linux security tools to varying degrees: Falco (created by Sysdig but now a Cloud Native Computing Foundation graduated project), Tetragon from Isovalent (now part of Cisco), and Microsoft Defender. ... Armo said it was motivated to create the rootkit to draw attention to two issues. The first was that, despite the io_uring technique being well documented for at least two years, vendors in the Linux security space had yet to react to the danger. The second purpose was to draw attention to deeper architectural challenges in the design of the Linux security tools that large numbers of customers rely on to protect themselves: “We wanted to highlight the lack of proper attention in designing monitoring solutions that are forward-compatible. Specifically, these solutions should be compatible with new features in the Linux kernel and address new techniques,” said Schendel.


Insider threats could increase amid a chaotic cybersecurity environment

Most organisations have security plans and policies in place to decrease the potential for insider threats. No policy will guarantee immunity to data breaches and IT asset theft but CISOs can make sure their policies are being executed through routine oversight and audits. Best practices include access control and least privilege, which ensures employees, contractors and all internal users only have access to the data and systems necessary for their specific roles. Regular employee training and awareness programmes are also critical. Training sessions are an effective means to educate employees on security best practices such as how to recognise phishing attempts, social engineering attacks and the risks associated with sharing sensitive information. Employees should be trained in how to report suspicious activities – and there should be a defined process for managing these reports. Beyond the security controls noted above, those that govern the IT asset chain of custody are crucial to mitigating the fallout of a breach should assets be stolen by employees, former employees or third parties. The IT asset chain of custody refers to the process that tracks and documents the physical possession, handling and movement of IT assets throughout their lifecycle. A sound programme ensures that there is a clear, auditable trail of who has access to and controls the asset at any given time. 


Distributed Cloud Computing: Enhancing Privacy with AI-Driven Solutions

AI has the potential to play a game-changing role in distributed cloud computing and PETs. By enabling intelligent decision-making and automation, AI algorithms can help us optimize data processing workflows, detect anomalies, and predict potential security threats. AI has been instrumental in helping us identify patterns and trends in complex data sets. We're excited to see how it will continue to evolve in the context of distributed cloud computing. For instance, homomorphic encryption allows computations to be performed on encrypted data without decrypting it first. This means that AI models can process and analyze encrypted data without accessing the underlying sensitive information. Similarly, AI can be used to implement differential privacy, a technique that adds noise to the data to protect individual records while still allowing for aggregate analysis. In anomaly detection, AI can identify unusual patterns or outliers in data without requiring direct access to individual records, ensuring that sensitive information remains protected. While AI offers powerful capabilities within distributed cloud environments, the core value proposition of integrating PETs remains in the direct advantages they provide for data collaboration, security, and compliance. Let's delve deeper into these key benefits, challenges and limitations of PETs in distributed cloud computing.
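The differential privacy technique mentioned — adding calibrated noise so aggregates stay useful while individual records stay protected — can be sketched in a few lines. This is a toy Laplace mechanism over a mean; a production system would also clip inputs per a data policy and track the cumulative privacy budget:

```python
import random

def laplace(scale: float, rng: random.Random) -> float:
    # The difference of two i.i.d. exponentials is Laplace-distributed.
    return rng.expovariate(1 / scale) - rng.expovariate(1 / scale)

def private_mean(values, epsilon: float, lo: float, hi: float,
                 rng: random.Random) -> float:
    """Mean of `values` with Laplace noise calibrated to sensitivity/epsilon."""
    clipped = [min(max(v, lo), hi) for v in values]
    # One record can move the mean by at most (hi - lo) / n.
    sensitivity = (hi - lo) / len(clipped)
    return sum(clipped) / len(clipped) + laplace(sensitivity / epsilon, rng)
```

Smaller `epsilon` means more noise and stronger privacy; the aggregate remains analyzable while no single record is recoverable from the output — the trade-off at the heart of PETs in shared cloud environments.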


Mobile Applications: A Cesspool of Security Issues

"What people don't realize is you ship your entire mobile app and all your code to this public store where any attacker can download it and reverse it," Hoog says. "That's vastly different than how you develop a Web app or an API, which sit behind a WAF and a firewall and servers." Mobile platforms are difficult for security researchers to analyze, Hoog says. One problem is that developers rely too much on the scanning conducted by Apple and Google on their app stores. When a developer loads an application, either company will conduct specific scans to detect policy violations and to make malicious code more difficult to upload to the repositories. However, developers often believe the scanning is looking for security issues, but it should not be considered a security control, Hoog says. "Everybody thinks Apple and Google have tested the apps — they have not," he says. "They're testing apps for compliance with their rules. They're looking for malicious malware and just egregious things. They are not testing your application or the apps that you use in the way that people think." ... In addition, security issues on mobile devices tend to have a much shorter lifetime, because of the closed ecosystems and the relative rarity of jailbreaking. When NowSecure finds a problem, there is no guarantee that it will last beyond the next iOS or Android update, he says.


The future of testing in compliance-heavy industries

In today’s fast-evolving technology landscape, being an engineering leader in compliance-heavy industries can be a struggle. Managing risks and ensuring data integrity are paramount, but the dangers are constant when working with large data sources and systems. Traditional integration testing within the context of stringent regulatory requirements is more challenging to manage at scale. This leads to gaps, such as insufficient test coverage across interconnected systems, a lack of visibility into data flows, inadequate logging, and missed edge case conditions, particularly in third-party interactions. Due to these weaknesses, security vulnerabilities can pop up and incident response can be delayed, ultimately exposing organizations to violations and operational risk. ... API contract testing is a modern approach used to validate the expectations between different systems, making sure that any changes in APIs don’t break expectations or contracts. Changes might include removing or renaming a field and altering data types or response structures. These seemingly small updates can cause downstream systems to crash or behave incorrectly if they are not properly communicated or validated ahead of time. ... The shifting left practice has a lesser-known cousin: shifting right. Shifting right focuses on post-deployment validation using concepts such as observability and real-time monitoring techniques.
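The expectations a contract encodes — required fields, stable types — can be sketched as a tiny validator. The contract below is hypothetical; dedicated tools such as Pact derive and verify these expectations from recorded consumer/provider interactions rather than hand-written dictionaries:

```python
# Hypothetical contract: each required response field and its expected type.
CONTRACT = {"id": int, "email": str, "created_at": str}

def violations(response: dict, contract=CONTRACT) -> list[str]:
    """Return a description of every way `response` breaks the contract."""
    problems = []
    for field, ftype in contract.items():
        if field not in response:
            problems.append(f"missing field: {field}")
        elif not isinstance(response[field], ftype):
            problems.append(f"wrong type for {field}: expected {ftype.__name__}")
    return problems
```

Run against every provider build in CI, a check like this catches exactly the "seemingly small updates" — a renamed field, an int that became a string — before they reach a downstream system in a regulated environment.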