
Daily Tech Digest - May 12, 2026


Quote for the day:

"Leadership seems mystical. It's actually methodical. The method is learnable and repeatable — and when followed, produces results that feel magical." --  Gordon Tredgold




The ghost in the machine: Why AI ROI dies at the human finish line

In "The Ghost in the Machine," Andrew Hallinson argues that the primary barrier to achieving a return on investment for artificial intelligence is not technical inadequacy but human psychological resistance. Despite multi-million dollar investments in advanced data stacks, many organizations suffer from what Hallinson terms an "aversion tax"—the significant loss of potential value caused by low adoption rates and human friction. This resistance stems from three psychological barriers: the "black box paradox," where lack of transparency breeds distrust; "identity threat," where employees feel the technology undermines their professional intuition and autonomy; and the "perfection trap," which involves holding algorithms to much higher standards than human peers. Hallinson illustrates a solution through his experience at ADP, where success was achieved by shifting the focus from restrictive data governance to empowering data democratization. By treating employees as strategic partners and behavioral architects rather than just data processors, leaders can overcome these hurdles. Ultimately, the article posits that technical excellence is wasted if cultural integration is ignored. For executives, the mandate is clear: building an AI-ready culture is just as critical as the engineering itself, as ignoring the human element transforms expensive AI tools into mere "shelfware" that fails to deliver on its mathematical promise.


AI Finds Code Vulnerabilities – Fixing Them Is the Real Challenge

The article "AI Finds Code Vulnerabilities – Fixing Them is the Real Challenge," published on DevOps Digest, explores the double-edged sword of utilizing artificial intelligence in software security. While AI-driven tools have revolutionized the ability to scan vast codebases and identify potential security flaws with unprecedented speed, the author argues that the industry's bottleneck has shifted from detection to remediation. Automated scanners often generate an overwhelming volume of alerts, many of which are false positives or lack the necessary context for immediate action. This "security debt" places a significant burden on development teams who must manually verify and patch each issue. Furthermore, the piece highlights that while AI can identify a problem, it often struggles to understand the complex business logic required to fix it without breaking existing functionality. The real challenge lies in integrating AI into the developer's workflow in a way that provides actionable, verified suggestions rather than just a list of problems. The article concludes that for AI to truly enhance cybersecurity, organizations must focus on automating the "fix" phase through sophisticated generative AI and better developer-security collaboration, ensuring that the speed of remediation finally matches the efficiency of automated detection.


Data Replication Strategies: Enterprise Resilience Guide

The article "Data Replication Strategies: Enterprise Resilience Guide" from Scality explores the critical methodologies for ensuring data durability and availability across physical systems. At its core, the guide highlights the fundamental tradeoff between consistency and availability, a tension that dictates how organizations architect their storage infrastructure. Synchronous replication is presented as the gold standard for zero-data-loss scenarios (RPO of zero) because it requires all replicas to acknowledge a write before completion; however, this introduces significant write latency. Conversely, asynchronous replication optimizes for performance and long-distance fault tolerance by propagating changes in the background, which decouples write speed from network latency but risks losing data not yet synchronized. Beyond timing, the content details architectural models like active-passive, where one primary site handles writes, and active-active, where multiple sites simultaneously serve traffic. The article also addresses consistency models such as strong, causal, and session consistency, emphasizing that the choice depends on specific application requirements. By aligning replication strategies with Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO), the guide argues that organizations can build a resilient infrastructure capable of surviving data center failures while balancing cost, bandwidth, and performance.
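The synchronous vs. asynchronous tradeoff the guide describes can be illustrated with a minimal sketch (this is not Scality's implementation; every name here is hypothetical): a synchronous write blocks until every replica acknowledges, giving an RPO of zero at the cost of latency, while an asynchronous write returns immediately and leaves un-propagated records at risk until a background process drains the queue.

```python
import time

class Replica:
    def __init__(self, name, latency):
        self.name = name
        self.latency = latency  # simulated network delay in seconds
        self.data = []

    def write(self, record):
        time.sleep(self.latency)  # simulate propagation delay
        self.data.append(record)

def synchronous_write(primary, replicas, record):
    """RPO = 0: the write completes only after every replica acknowledges,
    so the caller blocks on the slowest replica."""
    primary.append(record)
    for r in replicas:
        r.write(record)

def asynchronous_write(primary, replicas, record, queue):
    """Low write latency, non-zero RPO: the write returns immediately and
    replication happens later, so queued records can be lost in a failure."""
    primary.append(record)
    queue.append(record)  # drained by a background replicator, not shown

primary = []
replicas = [Replica("site-b", 0.001), Replica("site-c", 0.002)]
queue = []

synchronous_write(primary, replicas, "order-1")
asynchronous_write(primary, replicas, "order-2", queue)

# After the synchronous write, every replica holds "order-1".
# After the asynchronous write, "order-2" exists only on the primary
# and in the replication queue: that is the data at risk the RPO measures.
```

The same shape explains the article's RTO/RPO framing: choosing synchronous replication buys down RPO with latency, while asynchronous replication buys performance with exposure.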


When Should a DevOps Agent Act Without Human Approval?

The article titled "When Should a DevOps Agent Act Without Human Approval?" by Bala Priya C. outlines a comprehensive framework for navigating the transition from manual oversight to autonomous operations in DevOps. Central to this transition is a six-point autonomy spectrum, ranging from basic observation at Level 0 to full autonomy at Level 5. The author highlights that determining the appropriate level of independence for an agent depends on four critical factors: the reversibility of the action, the potential blast radius, the quality of incoming signals, and time sensitivity. For most organizations, the author suggests maintaining agents within Levels 1 through 3, where humans remain primary decision-makers or provide explicit approval for suggested actions. Level 4, which involves agents executing tasks and then notifying humans with a defined override window, should be reserved for narrowly defined, low-risk activities. Full Level 5 autonomy is only recommended after an agent has established a consistent, documented track record of success at lower levels. To manage these shifts safely, the article emphasizes the necessity of robust guardrails, including progressive rollouts, granular approval gates, and high signal-quality thresholds. This structured approach ensures that automation enhances operational efficiency without compromising the security or stability of the production environment, ultimately allowing engineers to focus on higher-value strategic innovation and developmental work.
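The four factors the author lists could be combined into a simple autonomy gate. The thresholds and names below are illustrative inventions, not the article's framework code: the point is only that reversibility, blast radius, signal quality, and time sensitivity jointly decide whether an action is suggested, approved, or executed with notification.

```python
from dataclasses import dataclass

SUGGEST_ONLY = "suggest_only"                 # Levels 1-3: human decides
APPROVAL_REQUIRED = "require_human_approval"  # human gates the execution
ACT_AND_NOTIFY = "act_then_notify"            # Level 4: execute, then notify
                                              # within an override window

@dataclass
class ProposedAction:
    reversible: bool        # can the action be cleanly rolled back?
    blast_radius: int       # number of services/users affected
    signal_quality: float   # confidence in the triggering signal, 0..1
    time_critical: bool     # does waiting for a human make things worse?

def autonomy_gate(action: ProposedAction) -> str:
    """Illustrative policy: only reversible, narrow, high-confidence,
    time-critical actions skip the human approval step."""
    if (action.reversible
            and action.blast_radius <= 1
            and action.signal_quality >= 0.95
            and action.time_critical):
        return ACT_AND_NOTIFY
    if action.signal_quality >= 0.8:
        return APPROVAL_REQUIRED
    return SUGGEST_ONLY

# An irreversible, wide-blast-radius action stays with a human
# regardless of how confident the signal is:
print(autonomy_gate(ProposedAction(False, 50, 0.99, True)))
```

Progressive rollout then amounts to loosening these thresholds only after the agent accumulates the documented track record the article calls for.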


8 guiding principles for reskilling the SOC for agentic AI

The article "8 guiding principles for reskilling the SOC for agentic AI" outlines a strategic roadmap for Security Operations Centers (SOCs) transitioning toward an AI-driven future. The first principle, embracing the agentic imperative, highlights that moving at "machine speed" is essential to counter advanced adversaries effectively. Leadership plays a critical role by setting a tone of rapid experimentation and "failing fast" to foster internal innovation. While cultural resistance—particularly fears regarding job displacement—is common, the article suggests addressing this by redefining roles around high-value tasks such as AI safety and governance. Hands-on training in secure sandboxes is vital for building practitioner confidence and "model intuition," allowing analysts to recognize when AI outputs are structurally flawed. Crucially, the "human-in-the-loop" principle ensures that non-deterministic AI remains under human oversight through clear escalation paths and audit trails. Beyond technology, the shift requires rethinking organizational structures to move from siloed disciplines to holistic, outcome-based orchestration. Ultimately, fostering collaboration between humans and machines allows analysts to relocate from "inside the process" to a supervisory position above it. By reimagining the operating model, CISOs can transform chaotic environments into calm, efficient hubs where agentic AI handles automated triage while humans provide strategic judgment and effective long-term accountability.


New DORA Report Claims Strong Engineering Foundations Drive AI ROI

The May 2026 InfoQ article summarizes Google Cloud's DORA report, "ROI of AI-Assisted Software Development," which offers a structured framework for calculating financial returns from AI adoption. The research argues that AI acts primarily as an amplifier; rather than repairing flawed processes, it magnifies existing organizational strengths and weaknesses. Consequently, achieving sustainable ROI necessitates robust engineering foundations, including quality internal platforms, disciplined version control, and clear workflows. A central concept introduced is the "J-Curve of value realization," where organizations typically face a temporary productivity dip due to the "tuition cost of transformation"—incorporating learning curves, verification taxes for AI-generated code, and essential process adaptations. Despite this initial drop, the report models a substantial first-year ROI of 39% for a typical 500-person organization, with a payback period of approximately eight months. However, leaders are cautioned against an "instability tax," as increased delivery speed may overwhelm manual review gates and elevate failure rates if not balanced with automated testing and continuous integration. Looking ahead, the research predicts compounding gains in years two and three, potentially reaching a 727% return as teams transition toward autonomous agentic workflows. Ultimately, the report emphasizes that AI’s true value lies in clearing systemic bottlenecks and unlocking latent human creativity, rather than pursuing simple headcount reduction.


Compliance Without Chaos In Modern Delivery

The article "Compliance Without Chaos In Modern Delivery" emphasizes transforming compliance from a disruptive, quarterly hurdle into a seamless, integrated component of the software delivery lifecycle. Rather than treating audits as high-stakes oral exams, the author advocates for building automated controls directly into existing engineering workflows. This "Policy as Code" approach effectively eliminates the ambiguity of "folklore" policies by enforcing rules through CI/CD gates, such as mandatory pull request reviews, automated testing, and artifact traceability. To maintain a state of continuous readiness, teams should implement automated evidence collection, ensuring that audit trails for changes, access, and security checks are generated as a natural byproduct of daily development work. The piece also highlights the importance of robust access management, favoring short-lived privileges and group-based permissions over static, high-risk credentials. Furthermore, continuous monitoring is described as essential for identifying silent failures in critical areas like encryption, log retention, and vulnerability status before they escalate into major incidents. By maintaining an updated evidence map and an "audit-ready pack" year-round, organizations can achieve a "boring" compliance posture. Ultimately, the goal is to shift from reactive manual efforts to a disciplined, automated machine that consistently proves security and regulatory adherence without sacrificing delivery speed or engineering focus.
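A "Policy as Code" gate of the kind described might look like the sketch below, where the policy rules, PR field names, and evidence format are all hypothetical illustrations rather than any real CI system's API. The key idea from the article survives the simplification: the audit-ready evidence record is produced as a byproduct of running the check, not assembled manually before an audit.

```python
import json
import datetime

# Illustrative rules; real gates would be enforced by the CI/CD system.
POLICY = {
    "min_reviewers": 2,
    "require_passing_tests": True,
    "require_signed_artifact": True,
}

def check_pull_request(pr: dict) -> dict:
    """Evaluate a PR against the policy and return an audit-ready
    evidence record as a natural byproduct of the check itself."""
    violations = []
    if len(pr.get("approvals", [])) < POLICY["min_reviewers"]:
        violations.append("insufficient_reviews")
    if POLICY["require_passing_tests"] and not pr.get("tests_passed"):
        violations.append("tests_not_passing")
    if POLICY["require_signed_artifact"] and not pr.get("artifact_signed"):
        violations.append("unsigned_artifact")
    return {
        "pr": pr["id"],
        "checked_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "passed": not violations,
        "violations": violations,
    }

evidence = check_pull_request({
    "id": 101,
    "approvals": ["alice", "bob"],
    "tests_passed": True,
    "artifact_signed": True,
})
print(json.dumps(evidence, indent=2))  # append to the audit trail
```

Collecting these records continuously is what turns the quarterly "oral exam" into the year-round "audit-ready pack" the article advocates.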


Ask a Data Ethicist: What Are the Legal and Ethical Issues in Summarizing Text with an AI Tool?

The use of AI tools for text summarization introduces significant legal and ethical challenges that organizations must navigate carefully. Legally, the primary concern revolves around copyright infringement, as these tools are often trained on large datasets containing proprietary data without explicit consent, potentially leading to complex intellectual property disputes. Furthermore, privacy risks emerge when users input sensitive or personally identifiable information into external AI systems, potentially violating strict regulations like the GDPR or CCPA. From an ethical standpoint, the article highlights the danger of algorithmic bias, where AI might inadvertently emphasize or distort certain viewpoints based on inherent flaws in its training data. Hallucinations represent another critical ethical risk, as AI can generate plausible-looking but factually incorrect summaries, leading to the spread of misinformation. To mitigate these systemic issues, the author emphasizes the importance of implementing robust data governance frameworks and maintaining a consistent "human-in-the-loop" approach. This ensures that summaries are rigorously reviewed for accuracy and fairness before being utilized in professional decision-making processes. Transparency regarding the use of automated tools is also paramount to maintaining public and stakeholder trust. Ultimately, while AI summarization offers immense efficiency, its deployment requires a balanced strategy that prioritizes legal compliance and ethical integrity.


UK chief executives make AI priority but delay plans

A recent report from Dataiku, based on a Harris Poll survey of nine hundred global chief executives, indicates that UK leaders are positioning artificial intelligence as a paramount corporate priority while simultaneously exercising significant caution in its implementation. The study, which focused on organizations with annual revenues exceeding five hundred million dollars, revealed that eighty-one percent of UK CEOs rank AI strategy as a top or high priority, a figure that notably surpasses the global average of seventy-three percent. However, this high level of ambition is tempered by a growing fear of financial waste; seventy-seven percent of British respondents expressed greater concern about over-investing in the technology than under-investing, compared to sixty-five percent of their international peers. This fiscal wariness has led to tangible delays in project rollouts across the country. Specifically, fifty-one percent of UK executives admitted to postponing AI initiatives due to regulatory uncertainty, a sharp increase from twenty-six percent just one year prior. As questions regarding return on investment and governance persist, a widening gap has emerged between boardroom aspirations and practical execution. UK leaders are increasingly weighing their expenditures more carefully, shifting from rapid adoption toward a more calculated approach that prioritizes oversight and navigates the evolving legislative landscape to avoid costly mistakes.


Open Innovation and AI will define the next generation of manufacturing: Annika Olme, CTO, SKF

Annika Olme, the CTO of SKF, emphasizes that the future of manufacturing lies at the intersection of open innovation and advanced technology like Artificial Intelligence. She highlights how SKF is transitioning from being a traditional bearing manufacturer to a digital-first, data-driven leader. By fostering a culture of deep collaboration with startups, academia, and technology partners, the company accelerates the development of smart solutions that optimize industrial processes globally. AI and machine learning are central to this evolution, particularly in predictive maintenance, which allows customers to anticipate failures and reduce downtime significantly. Olme also underscores the critical role of sustainability, noting that digital transformation is intrinsically linked to circularity and energy efficiency. By leveraging sensors and real-time data analysis, SKF helps various industries minimize waste and lower their carbon footprint. The “Smart Factory” vision involves integrating these technologies into every stage of the product lifecycle, from design to end-of-use recycling. Ultimately, the goal is to create a seamless synergy between human ingenuity and machine intelligence, ensuring that manufacturing remains both competitive and environmentally responsible. This holistic approach to innovation not only boosts productivity but also redefines how global industrial leaders address modern challenges like climate change, resource scarcity, and supply chain volatility.

Daily Tech Digest - October 13, 2025


Quote for the day:

“Become the kind of leader that people would follow voluntarily, even if you had no title or position.” -- Brian Tracy


Is vibe coding ruining a generation of engineers?

In the era of AI, the traditional journey to coding expertise that has long supported senior developers may be at risk. Easy access to large language models (LLMs) enables junior coders to quickly identify issues in code. While this speeds up software development, it can distance developers from their own work, delaying the growth of core problem-solving skills. As a result, they may avoid the focused, sometimes uncomfortable hours required to build expertise and progress on the path to becoming successful senior developers. ... The increasing availability of these tools from Anthropic, Microsoft and others may reduce opportunities for coders to refine and deepen their skills. Rather than “banging their heads against the wall” to debug a few lines or select a library to unlock new features, junior developers may simply turn to AI for an assist. This means senior coders with problem-solving skills honed over decades may become an endangered species. ... While concerns about AI diminishing human developer skills are valid, businesses shouldn’t dismiss AI-supported coding. They just need to think carefully about when and how to deploy AI tools in development. These tools can be more than productivity boosters; they can act as interactive mentors, guiding coders in real time with explanations, alternatives and best practices.


How Reassured Are You by Your Cloud Compliance?

For organizations, the assurance of a secure cloud hinges on proficient NHI (non-human identity) management. By implementing a strategic plan, companies can significantly bolster their defenses against unauthorized access and potential threats. Understanding and managing machine identities becomes a crucial pillar of cloud assurance strategies. ... As organizations strive to maintain their competitive edge, the strategic importance of NHIs in ensuring compliance and security cannot be overstated. By fostering a culture of security awareness and leveraging robust management platforms, businesses can confidently navigate the complex terrain of cloud compliance. ... Compliance is a formidable challenge. However, NHI management offers actionable solutions. By auditing and tracking NHIs, organizations gain unparalleled visibility into access patterns and potential breaches, ensuring adherence to relevant regulatory frameworks across multiple sectors. Automation of audit trails and enforcement of policies can significantly reduce the burden on compliance teams, allowing companies to focus on strategic areas of business development. Additionally, adaptive NHI management systems can be scaled and updated to align with new compliance standards. This flexibility positions businesses to react quickly to regulatory changes without incurring significant downtime or resource allocation shifts.


AI Powered SOC: The Shift from Reactive to Resilient

Current SOC operations are described as “buried — not just in alert volume, but in disconnected tools, fragmented telemetry, expanding cloud workloads, and siloed data.” This paints a picture of overwhelmed teams struggling to maintain control in an increasingly complex threat landscape. ... With AI Agents, automated response actions, such as containment and remediation, can be executed with human oversight for high-impact situations. AI can handle routine containment and remediation tasks, such as isolating a compromised host or blocking a malicious hash. After an action is taken, the AI can perform validation checks to ensure business operations are not negatively impacted, with automatic rollback triggers if necessary. ... This transition is not a flip of a switch; it is a strategic journey. The organizations that succeed will be those who invest in integrating AI with existing security ecosystems, upskill their talent to work with these new technologies, and ensure robust governance is in place. Embracing an AI-powered SOC is no longer optional but a strategic imperative. By building a partnership between human expertise and machine efficiency, organizations will transform their security operations from a vulnerable cost center into a resilient and agile business enabler. AI is not a silver bullet—but it’s a strategic lever. The SOC of the future won’t just detect threats; it will predict, prevent, and persist. Shifting to resilience means embracing AI not as a tool, but as a partner in defending digital trust.
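The act-validate-rollback loop described above can be sketched in miniature. The hosts, health check, and action names here are invented for illustration; a real SOC agent would probe the business services that depend on the contained host before deciding whether to keep or undo the isolation.

```python
def contain(host, actions_log):
    """Routine containment step, e.g. isolating a host from the network."""
    actions_log.append(("isolate", host))

def validate(host):
    """Post-action health check (hypothetical): isolating the primary
    database would break business operations, so that fails validation."""
    return host != "db-primary"

def rollback(host, actions_log):
    actions_log.append(("restore", host))

def auto_contain(host):
    """Act, validate the business impact, and roll back automatically
    if the containment did more harm than good."""
    log = []
    contain(host, log)
    if not validate(host):
        rollback(host, log)
    return log

print(auto_contain("laptop-042"))  # containment sticks
print(auto_contain("db-primary"))  # rolled back automatically
```

The human oversight the article insists on sits around this loop: high-impact actions route to an analyst before `contain` runs at all.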


Cybersecurity As A Strategy: The CIO’s Playbook for a Perma-Threat Landscape

When cybersecurity is seen as a strategic function, it helps businesses stay strong. It protects intellectual property, makes sure that rules are followed, and builds the trust of customers, partners, and other stakeholders. It can also help businesses be more innovative by letting them look into new markets, use new technologies, and change how they do business with confidence. The main point of this playbook is simple: CIOs need to stop using reactive defense models and start seeing cybersecurity as a key part of their business strategy. In a world where threats are always present, the companies that do well will be the ones whose leaders see cyber resilience as important for brand reputation, business continuity, and staying ahead of the competition. ... In this situation, being reactive is not only dangerous, it’s also costly. The costs of a cyberattack go well beyond fixing the damage right away. Companies can be fined by the government, sued, lose money when their systems go down, and have to pay more for insurance. The reputational damage can be even more devastating: loss of customer trust, decreased investor confidence, and long-term brand erosion. According to studies in the field, the average cost of a data breach is now over a million dollars, and high-profile cases have cost hundreds of millions. ... CIOs need to stop thinking about “building walls and patching holes” and start thinking about how to find, stop, and neutralize threats before they can do any damage. 


What to look for in a data protection platform for hybrid clouds

Data protection is a broad category that includes data security but also encompasses backup and disaster recovery, safe data storage, business continuity and resilience, and compliance with data privacy regulations. ... In the public cloud model, the hyperscalers (such as Amazon Web Services, Google Cloud, and Microsoft Azure) are responsible for protecting their own infrastructure, but the enterprise using them — you — is responsible for properly configuring and managing its own data in the cloud. One of the most common causes of cloud-based data breaches is a simple misconfiguration of an Amazon S3 storage bucket. Cloud security posture management (CSPM) tools can help identify misconfigurations, among other risks. ... Data protection can be performed with on-premises appliances or in the cloud. And organizations can manage their data protection functionality themselves or turn to a managed service. The trend lines are clear: Just as applications and data are moving to the cloud, data protection is moving to the cloud as well, due to the scalability, flexibility, and accessibility that the cloud provides. ... Because every enterprise is different and because hybrid clouds are both complex and varied in their handling of data, you need to get a clear grasp on your specific needs, capabilities, and resources before engaging prospective vendors and then choosing specific solutions for data protection.
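At its core, a CSPM-style misconfiguration sweep reduces to checking an inventory of resource configurations against a baseline. The sketch below uses an invented record shape, not a real cloud provider's API, to show the pattern behind catching the kind of public-bucket mistake the article cites:

```python
# Toy CSPM-style check over bucket configuration records.
# The field names (public_read, encryption_at_rest, ...) are hypothetical.
def find_misconfigured_buckets(buckets):
    findings = []
    for b in buckets:
        if b.get("public_read") or b.get("public_write"):
            findings.append((b["name"], "publicly accessible"))
        if not b.get("encryption_at_rest"):
            findings.append((b["name"], "no default encryption"))
    return findings

inventory = [
    {"name": "app-logs", "public_read": False, "encryption_at_rest": True},
    {"name": "marketing-assets", "public_read": True,
     "encryption_at_rest": False},
]
for name, issue in find_misconfigured_buckets(inventory):
    print(f"{name}: {issue}")
```

Real CSPM tools run checks like this continuously against live provider APIs, which is exactly the shared-responsibility gap the article says the enterprise, not the hyperscaler, must close.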


Git Services Need Better Security. Here’s How End-to-End Encryption Could Help

Most development teams rely on platforms like GitHub, GitLab, or Bitbucket to manage their projects and collaborate across teams. These services work well for version control and collaboration, but there’s a problem. System breaches have become common, and the data stored in repositories can be highly valuable to attackers. Think about what’s in your repositories. Source code, API keys, infrastructure configurations, and the complete history of your project’s development. If someone gains unauthorized access to your Git service provider’s systems, they can access all of that. Current solutions don’t effectively address this problem. Some open-source projects have attempted to add encryption to Git workflows, but they suffer from two major issues: weak security guarantees and poor performance. The overhead is so large that most teams won’t adopt them. ... End-to-end encryption for Git services would mean that even if your service provider’s systems are compromised, your code remains secure. The provider wouldn’t have the keys to decrypt your repositories. This level of security has become standard for messaging apps and cloud storage. It makes sense to apply the same principles to Git services, especially given the value of what’s stored there. For regulated industries, this could help meet compliance requirements. For any organization with valuable intellectual property, it adds an important layer of protection.


Bringing authentication into the AI century

Today’s customer journey flows much differently than before, spreading across devices, shaped by automation, and powered by artificial intelligence (AI) assistants. What worked five years or even one year ago might already be standing in the way of creating impactful experiences. ... Authentication flows that anticipate outdated behavior and patterns, like expecting static sessions and manual inputs, aren’t able to keep up with the new normal of digital commerce. Patterns that used to look suspicious, including ultra-fast clicks and cross-device shopping, might be totally legitimate. However, if legacy systems can’t tell the difference, the experience of real customers will suffer. They might get flagged as fraud and experience friction, ultimately ending in a negative experience and a lost sale. Furthermore, you must choose the right authentication method in accordance with specific fraud MOs to avoid letting fraud slip through the cracks. ... Leaders don’t need to choose between protecting their business and giving customers the smooth experience they expect. Modern authentication must be built on trust, timing, and intelligence, rather than interruptions. ... Authentication needs to be just as dynamic as today’s fraudsters. It’s not about adding more steps; it’s about smarter context, stronger signals, and systems that can keep up. When trust drives your flow, authentication works seamlessly in the background, keeping real customers loyal and real risks out.


From Automation to Autonomy: Agentic AI set to transform India’s telecom sector

KPMG’s report introduces the Agentic AI Stack for Indian Telcos, a six-layer model covering customer experience, network intelligence, orchestration, data integration, and governance, designed to guide operators from traditional networks toward intelligent, autonomous systems. Current adoption trends show that half of telecom companies have implemented their first GenAI use case, and business leaders are planning to invest USD 25 million in new tech talent and USD 24 million in customer experience initiatives over the next 12 months. Looking ahead, KPMG recommends that telecom operators scale AI pilots to enterprise-wide deployments with AI-ready infrastructure and skilled teams, while policymakers should create agile regulations and governance frameworks to enable safe and responsible AI innovation. Collaboration among startups, academia, and industry partners is critical to building an inclusive and intelligent telecom ecosystem. “Agentic AI is more than a technological advancement — it is a strategic paradigm shift that empowers telecom operators to move from reactive to autonomous systems,” said Akhilesh Tuteja, Partner & National Leader – Technology, Media and Telecommunications (TMT), KPMG in India. “This transformation will unlock new levels of operational efficiency, customer personalization, and revenue growth. India’s unparalleled scale, data richness, and innovation ecosystem uniquely position it to lead the global telecom AI revolution.”


TRIAL: Charting the Path from SCREAM to AARAM – A Simplified Guide for Effective Enterprise Architecture

Despite billions invested annually in enterprise architecture (EA), organizations grapple with a persistent gap between theoretical frameworks and practical execution. In 2025, 94% of CIOs deem EA “absolutely critical” for embedding sustainability and driving digital resilience, yet 57% of architects report feeling underutilized in strategic initiatives. ... At its core, architecture is about effectively managing the lifecycle changes of architecture components and their relationships. TRIAL establishes an EA approach that resonates with architects and stakeholders by embracing these lifecycle stages as central motifs. This approach captures and builds a data and AI-driven architecture around its underlying evolving repository continuum, leveraging the same engagement model for collaborative execution aligned with organizational objectives. ... Enterprise architecture maturity traditionally requires skilled resources, extensive knowledge, and significant time investment. Organizations face resource scarcity while architects average only 18-24 months tenure, making adaptive architecture management nearly impossible. This challenge is exacerbated by broader technology trends, where 70-85% of enterprise AI projects fail due to poor data management, misalignment with business goals, and architectural oversights—rates double those of non-AI IT projects. TRIAL addresses this through progressive maturity states that build upon each other. Organizations advance through clearly defined maturity levels—from Balanced (foundation) through Yearly (planning),


Ask a Data Ethicist: Is It Wrong to Digitally “Resurrect” Someone?

There was even a situation recently which saw the recreation of a murdered person deliver an AI impact statement in court – literally speaking from beyond the grave. This marked a legal first and raised a lot of controversy over whether this was a type of emotional manipulation or a reasonable opportunity to give the victim a voice. It's clear, though, that the door is now open for others to do this, raising more of these questions, particularly as the tools to make this type of AI are now widely available. ... Data privacy laws afford a level of protection when it comes to our personal data. However, that protection generally ends at death: data protection laws don't extend to the deceased. The laws exist to protect living individuals. ... It's a complex question with no "one size fits all" response. The answer might depend on several factors including: their wishes as outlined in their will; the wishes of their family and estate; how they will be represented in this new digital form; who controls the digital entity; and who might be compensated or stand to gain from the digital entity. Increasingly, all of us might want to plan for our digital afterlife, including whether or not we want one at all. Having conversations with loved ones now about their wishes for their data and other digital assets, including what should or should not be done with these when they are gone, can provide clear guidance for making an ethical choice with respect to the question of digital resurrection.

Daily Tech Digest - September 16, 2025


Quote for the day:

“Too many of us are not living our dreams because we are living our fears.” -- Les Brown



Your employees are feeling ‘OK’ – and that’s a serious problem

At first glance, OK doesn’t sound dangerous. Teams aren’t unhappy enough to trigger alarms, nor are they burning out; they keep delivering at an acceptable level. But ‘acceptable’ is not the same as ‘successful’. Teams stuck in OK lack the energy, creativity and ambition to truly thrive. They’re passable, not powerful – and that complacency can quietly erode performance. ... In fact, the lifetime value of a happy employee is more than twice that of an OK one. This is not soft sentiment – it’s hard economics. By contrast, OK teams bring hidden costs. They are about twice as likely to miss targets as happy teams and have 50% higher staff turnover. They are also less collaborative, less creative and less resilient when challenges arise. ... First, reframe happiness as a serious business metric. It’s not vague or fluffy. It’s measurable, trackable and improvable. It connects directly to performance, retention and, ultimately, profit. Second, focus on the drivers of happiness. I’ve identified five ways to develop happiness at work: connect, be fair, empower, challenge and inspire. ... Third, embed a rhythm of measure-meet-repeat. Measure: Use light-touch weekly pulses and deeper quarterly surveys to gather data; Meet: Bring teams together to discuss results, identify blockers and celebrate progress; and  Repeat: Build momentum with regular reflection and action. This rhythm transforms data into dialogue, which helps organisations to improve.


Are cloud providers neglecting security to chase AI?

An unsettling trend now challenges this narrative. Recent research, including the “State of Cloud and AI Security 2025” report conducted by the Cloud Security Alliance (CSA) in partnership with cybersecurity company Tenable, highlights that cloud security, once considered best in class, is becoming more fragmented and misaligned, leaving organizations vulnerable. The issue isn’t a lack of resources or funding – it’s an alarming shift in priorities by cloud providers. As investment and innovative energies focus more on artificial intelligence and hybrid cloud development, security efforts appear to be falling behind. ... The dangers of this complexity are made worse by what the report calls the weakest link in cloud security: identity and access management (IAM). Nearly 59% of respondents cited insecure identities and risky permissions as their main concerns, with excessive permissions and poor identity hygiene among the top reasons for breaches. ... Deprioritizing security in favor of AI products is a gamble cloud providers appear willing to take, but there are clear signs that enterprises might not follow them down this path forever. The CSA/Tenable report highlights that 31% of surveyed respondents believe their executive leadership fails to grasp the nuances of cloud security, and many have uncritically relied on native tools from cloud vendors without adding extra protections.


The Future of Global A.I.

The accelerating development and adoption of AI products, services and platforms present both challenges and opportunities for regions like the Middle East and North Africa (MENA) and India that have ambitions of integrating AI into their economies. Data presented in the report suggests that the mobile user bases in India and MENA are primed for AI products and services on mobile platforms. For the Middle East, AI is a crucial enabler of economic diversification beyond its hydrocarbon industries, whereas for India, AI can be transformative for its world-leading digital public infrastructure, public service delivery, and digital payments platforms.  ... The BOND report notes that the current wave of AI development and adoption is unprecedented when compared to previous technological waves. It uses OpenAI’s ChatGPT as a benchmark to showcase the explosive growth of user adoption as the platform achieved 1 million users within five days, 800 million weekly active users within 17 months, and registered 90 percent of its users from non-US geographies by its third year. ... In an era of increasing geopolitical competition, countries are supporting efforts to achieve digital sovereignty. The BOND report notes a growing interest in Sovereign AI projects, as demonstrated by NVIDIA’s partnerships in countries like France, Spain, Switzerland, Ecuador, Japan, Vietnam, and Singapore.


Zero Trust Is 15 Years Old — Why Full Adoption Is Worth the Struggle

Effective ZT will not eliminate all breaches – there are simply too many ways into a network – but it would certainly limit the effectiveness of stolen credentials and inhibit lateral movement by intruders, and malicious activity by insiders inside the enterprise network. “Here’s the part most people miss: Zero Trust is just as important for reducing insider risk as it is for keeping out external threats,” comments Chad Cragle. ... Putting people first is good people management and good PR, but bad security. It gives too much leeway to three basic human characteristics: a propensity to trust on sight, a tendency to be lazy, and a deep-rooted curiosity. We have a natural tendency to trust first and ask questions later, to skirt security controls when they are too intrusive and hinder our work, and to indulge our curiosity. ... Technology first is becoming more essential in the emerging world of AI-enhanced deepfakes. We can no longer rely on people being able to recognize people. We are easily fooled into believing this entity is the entity we know and trust. ... Getting the technology ready for ZT is also hard, partly because many applications were not built with ZT in mind. “Many older programs just don’t play nice with modern security,” comments J Stephen Kowski, “so businesses end up stuck between keeping things secure and not slowing down the way they work.”


Crafting an Effective AI Strategy for Your Organization: A Comprehensive Approach

Without a deliberate strategy, AI initiatives might remain small pilot projects that never scale, or they might stray from business needs. A well-crafted AI strategy acts as a compass to guide AI investments and projects. It helps answer critical questions upfront: Which problems are we trying to solve with AI? How do these tie to our business KPIs? Do we have the right data and infrastructure? By addressing these, the strategy ensures AI adoption is purposeful rather than purely experimental. Crucially, the strategy also weaves in ethical and regulatory considerations ... An AI CoE is a dedicated team or organizational unit that centralizes AI expertise and resources to support the entire company’s AI initiatives. Think of it as an in-house “AI SWAT team” that bridges the gap between high-level strategy and the technical execution of AI projects. ... As organizations deploy AI more widely, ethical, legal, and societal responsibilities become non-negotiable. Responsible AI is all about ensuring that systems are fair, transparent, safe, and aligned with human values. ... Many AI models, especially deep learning systems, are often criticized for being “black boxes”—making decisions that are difficult to interpret. Explainable AI (XAI) is about creating methods and tools to make these models transparent and their outputs understandable.


Building security that protects customers, not just auditors

Good engineering usually leads to strong security, he argues, cautioning against just going through the motions to meet compliance requirements. ... Sadly, threat actors don’t need to improve; most of the market is very far behind, and old-school attacks like phishing still work easily. One trend we’re seeing in the last few years is a strong focus on crypto attacks, and on crypto exchanges. Even these usually involve classic techniques. Another is “SMS abuse” attacks, where attackers exploit endpoints that trigger sending SMS messages, which they direct to premium numbers they want to bump up. Many such attacks are only discovered when the bill from the SMS provider arrives. ... Current Security Information and Event Management (SIEM) vendors often offer stacks and pricing models that just don’t fit the sheer scale and speed of transactions. Sure, you can make them work – if you spend millions! ... If you just check boxes, you are not protecting your customers; you are just protecting your company from the auditor. Try to understand the rationale behind the control and implement it according to your company’s architecture. Think of it philosophically: would you be happy being a box-ticker, or would you prefer to have impact? ... Your goal is to find a way to collaborate with your QSA; they can be true partners for driving positive change in the company.


Enterprise-Grade Data Ethics: How to Implement Privacy, Policy, and Architecture

Embedding ethics and privacy into daily business operations involves practical, continuous steps integrated deeply into organizational processes. Core recommendations include developing clear and understandable data policies and making them accessible to all stakeholders, regularly training teams to maintain updated awareness of ethical data standards, building privacy considerations directly into system architecture from inception, and collaborating with legal and technical teams on application programming interfaces (APIs) and data models to incorporate explicit privacy rules. ... An enterprise architecture framework creates fundamental support by outlining precise methods for data storage, transfer, and access permissions. Organizations use new and emerging technologies alongside other comprehensive tools to establish systematic policies while implementing strong encryption and data masking approaches for secure data management. ... Executive leaders who dedicate themselves to ethical data handling create profound changes in corporate cultural values. Organizations can demonstrate their strategic dedication to data ethics through executive-level visibility of privacy and ethics system design oversight, combined with employee training investments and performance accountability systems.


CIOs are stressed — and more or less loving it

Not surprisingly, AI has upped the ante for stress — or, in Richard’s case, concern over the quick adoption of AI tools by end users who may or may not know what to do with them. “I would say that’s probably the thing I worry about the most. I don’t know that it stresses me out,” but he constantly thinks about what tools employees are using and how they are using them. “We don’t want to suck away all the productivity gains by limiting access to great tools, but at the same time, we don’t want to let people run wild with [personally identifiable information] or data” by tools not managed by IT. ... Even with all the pressures on CIOs today and the need to wear many hats, most say the job is still worth it. Pressure, it seems, is not always a bad thing. “I’m still in it, so it must be worth it,” Grinnell says. “CIOs have a certain personality; we know you’re not getting into the job and it’ll be smooth sailing. We have to solve a challenge — whatever the challenge is. … It’s tiring, it’s stressful, but I get up energized every day to go tackle that. That’s who I am.” Driscoll says she likes pressure and finds her role “worth it more now than ever because the job of CIO and CTO has evolved to where the expectation is you will be responsible for the technology, but also be a core partner in where the business is going. For me, that ability to help drive business outcomes, and shape wherever we go as a company makes my job more exciting and worth it.”


How AI and Machine Learning Are Shaping the Fight Against Ransomware

Machine learning algorithms can recognise and understand complex patterns within data sets. Analysing historical information facilitates the identification of behavioural patterns associated with ransomware attacks, enabling strategies to be developed to prevent these attacks in the future. One of the best examples is the use of AI tools that have proven successful in detecting and protecting against cyber threats, including ransomware, by examining and analysing network traffic and user behaviour. ... When it comes to ransomware, speed is everything. As noted by IBM, AI-enabled systems allow organizations to respond to threats 85% faster than traditional methods. This rapid response reduces the damage caused by an attack while also delivering substantial cost savings to enterprises. ... Machine learning algorithms are given information about a user’s network activity that is considered normal. Any subsequent actions are deemed abnormal if they involve changes to files and data that are out of the norm for the user. These activities are flagged so that they can be pursued further. This level of automation allows ransomware to be detected prior to encryption, allowing for timely user intervention. With ransomware pre-encryption detection algorithms, 999 out of 1,000 threats can be accurately identified, and CrowdStrike likewise claims remarkable accuracy for its behaviour-based ransomware detection.
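The behavioural baselining described above can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's actual detection logic: the z-score approach, the threshold, and the "file operations per hour" metric are all assumptions chosen for the example. It learns a user's normal rate of file modifications and flags the kind of sudden burst that precedes mass encryption.

```python
from statistics import mean, stdev

def build_baseline(file_ops_per_hour):
    """Learn a user's normal rate of file modifications from history."""
    return mean(file_ops_per_hour), stdev(file_ops_per_hour)

def is_anomalous(observed, baseline, threshold=3.0):
    """Flag activity far outside the learned norm (a possible
    pre-encryption burst of file changes)."""
    mu, sigma = baseline
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# Typical activity for this user: a handful of file changes per hour
normal_history = [4, 6, 5, 7, 5, 6, 4, 5]
baseline = build_baseline(normal_history)

print(is_anomalous(6, baseline))    # ordinary editing activity
print(is_anomalous(900, baseline))  # mass file modification, ransomware-like
```

Real products layer many such signals (entropy of written data, process lineage, network behaviour), but the core idea of "learn normal, flag deviation" is the same.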


Navigating the new frontier: Data sovereignty, AI and the role of global infrastructure

Data centers, once mere warehouses of information, are now the backbone of AI-driven economies. In an ever-expanding universe of digital information and content, data center operators are now faced with the daunting task of balancing operational efficiencies against the stringent need for regulatory compliance. As governments worldwide tighten regulations around data residency, cybersecurity, and AI governance, multinational companies face a complex challenge: how to maintain seamless operations while adhering to diverse and often conflicting legal frameworks. ... The integration of programmable infrastructure and cloud-Edge capabilities into cross-border networks and operations further enhances flexibility, allowing customers to localize data processing without duplicating costly physical assets. This hybrid model, underpinned by scalable, region-sensitive architecture, positions compliance as an intrinsic design principle rather than an afterthought. As data sovereignty laws proliferate, governments must support these efforts through fundamental research, clear regulatory frameworks, and partnerships with industry leaders to avoid a fragmented digital landscape that could stifle innovation. ... The convergence of data sovereignty, AI governance, and critical infrastructure security demands a new model of digital governance - one where compliance, innovation, and resilience are seamlessly integrated. 

Daily Tech Digest - August 11, 2025


Quote for the day:

"Leadership is absolutely about inspiring action, but it is also about guarding against mis-action." -- Simon Sinek


Attackers Target the Foundations of Crypto: Smart Contracts

Central to the attack is a malicious smart contract, written in the Solidity programming language, with obfuscated functionality that transfers stolen funds to a hidden externally owned account (EOA), says Alex Delamotte, the senior threat researcher with SentinelOne who wrote the analysis. ... The decentralized finance (DeFi) ecosystem relies on smart contracts — as well as other technologies such as blockchains, oracles, and key management — to execute transactions, manage data on a blockchain, and allow for agreements between different parties and intermediaries. Yet their linchpin status also makes smart contracts a focus of attacks and a key component of fraud. "A single vulnerability in a smart contract can result in the irreversible loss of funds or assets," Shashank says. "In the DeFi space, even minor mistakes can have catastrophic financial consequences. However, the danger doesn’t stop at monetary losses — reputational damage can be equally, if not more, damaging." ... Companies should take stock of all smart contracts by maintaining a detailed and up-to-date record of all deployed smart contracts, verifying every contract, and conducting periodic audits. Real-time monitoring of smart contracts and transactions can detect anomalies and provide fast response to any potential attack, says CredShields' Shashank.


Is AI the end of IT as we know it?

CIOs have always been challenged by the time, skills, and complexities involved in running IT operations. Cloud computing, low-code development platforms, and many DevOps practices helped IT teams move “up stack,” away from the ones and zeros, to higher-level tasks. Now the question is whether AI will free CIOs and IT to focus more on where AI can deliver business value, instead of developing and supporting the underlying technologies. ... Joe Puglisi, growth strategist and fractional CIO at 10xnewco, offered this pragmatic advice: “I think back to the days when you wrote in assembly and it took a lot of time. We introduced compilers, higher-level languages, and now we have AI that can write code. This is a natural progression of capabilities and not the end of programming.” The paradigm shift suggests CIOs will have to revisit their software development lifecycles for significant shifts in skills, practices, and tools. “AI won’t replace agile or DevOps — it’ll supercharge them, with standups becoming data-driven, CI/CD pipelines self-optimizing, and QA leaning on AI for test creation and coverage,” says Dominik Angerer, CEO of Storyblok. “Developers shift from coding to curating, business users will describe ideas in natural language, and AI will build functional prototypes instantly. This democratization of development brings more voices into the software process while pushing IT to focus on oversight, scalability, and compliance.”


From Indicators to Insights: Automating Risk Amplification to Strengthen Security Posture

Security analysts don’t want more alerts. They want more relevant ones. Traditional SIEMs generate events using their own internal language that involve things like MITRE tags, rule names and severity scores. But what frontline responders really want to know is which users, systems, or cloud resources are most at risk right now. That’s why contextual risk modeling matters. Instead of alerting on abstract events, modern detection should aggregate risk around assets including users, endpoints, APIs, or services. This shifts the SOC conversation from “What alert fired?” to “Which assets should I care about today?” ... The burden of alert fatigue isn’t just operational but also emotional. Analysts spend hours chasing shadows, pivoting across tools, chasing one-off indicators that lead nowhere. When everything is an anomaly, nothing is actionable. Risk amplification offers a way to reduce the unseen yet heavy weight on security analysts and the emotional toll it can take by aligning high-risk signals to high-value assets and surfacing insights only when multiple forms of evidence converge. Rather than relying on a single failed login or endpoint alert, analysts can correlate chains of activity whether they be login anomalies, suspicious API queries, lateral movement, or outbound data flows – all of which together paint a much stronger picture of risk.
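The asset-centric aggregation described above can be sketched as a toy model. The signal names, weights, and thresholds below are invented for illustration, not taken from any SIEM product: alerts are rolled up per asset, and an asset surfaces only when its combined score is high and several independent signal types converge.

```python
from collections import defaultdict

# Hypothetical severity weights per signal type
WEIGHTS = {"login_anomaly": 30, "suspicious_api": 25,
           "lateral_movement": 40, "data_egress": 45}

def amplify(events, score_floor=60, min_signal_types=2):
    """Aggregate alerts around assets and surface only those where
    multiple forms of evidence converge on a high combined score."""
    scores = defaultdict(int)
    kinds = defaultdict(set)
    for asset, signal in events:
        scores[asset] += WEIGHTS.get(signal, 10)
        kinds[asset].add(signal)
    return sorted(a for a in scores
                  if scores[a] >= score_floor
                  and len(kinds[a]) >= min_signal_types)

events = [
    ("laptop-17", "login_anomaly"),
    ("laptop-17", "lateral_movement"),
    ("laptop-17", "data_egress"),
    ("printer-02", "login_anomaly"),  # one-off indicator: stays below the floor
]
print(amplify(events))  # only the asset with converging evidence surfaces
```

The design choice worth noting is the two-part gate: a single noisy signal can never surface an asset on its own, which is exactly the alert-fatigue reduction the article describes.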


The Immune System of Software: Can Biology Illuminate Testing?

In software engineering, quality assurance is often framed as identifying bugs, validating outputs, and confirming expected behaviour. But similar to immunology, software testing is much more than verification. It is the process of defining the boundaries of the system, training it to resist failure, and learning from its past weaknesses. Like the immune system, software testing should be multi-layered, adaptive, and capable of evolving over time. ... Just as innate immunity is present from biological birth, unit tests should be present from the birth of our code. Just as innate immunity doesn't need a full diagnostic history to act, unit tests don’t require a full system context. They work in isolation, making them highly efficient. But they also have limits: they can't catch integration issues or logic bugs that emerge from component interactions. That role belongs to more evolved layers. ... Negative testing isn’t about proving what a system can do — it’s about ensuring the system doesn’t do what it must never do. It verifies how the software behaves when exposed to invalid input, unauthorized access, or unexpected data structures. It asks: Does the system fail gracefully? Does it reject the bad while still functioning with the good? Just as an autoimmune disease results from a misrecognition of the self, software bugs often arise when we misrecognise what our code should do and what it should not do.
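The negative-testing idea can be made concrete with a small example. The `parse_age` validator below is invented for illustration; the point is the shape of the test: it passes only if the system rejects what it must never accept, and rejects it gracefully with a well-defined error.

```python
def parse_age(value):
    """Accept only sensible integer ages; fail loudly on anything else."""
    if not isinstance(value, int) or isinstance(value, bool):
        raise TypeError("age must be an integer")
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

def test_rejects_bad_input():
    """Negative test: verify the boundaries of 'self' vs 'non-self'."""
    for bad in (-1, 200, "forty", None, True):
        try:
            parse_age(bad)
        except (TypeError, ValueError):
            continue  # graceful, expected rejection
        raise AssertionError(f"{bad!r} was wrongly accepted")

def test_accepts_good_input():
    """Positive test: the good must still function."""
    assert parse_age(42) == 42

test_rejects_bad_input()
test_accepts_good_input()
print("all checks passed")
```

Like the immune system's self/non-self discrimination, the suite defines both what belongs (valid ages pass through unchanged) and what must be repelled (everything else raises a specific, catchable error rather than crashing or silently succeeding).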


CSO hiring on the rise: How to land a top security exec role

“Boards want leaders who can manage risk and reputation, which has made soft skills — such as media handling, crisis communication, and board or financial fluency — nearly as critical as technical depth,” Breckenridge explains. ... “Organizations are seeking cybersecurity leaders who combine technical depth, AI fluency, and strong interpersonal skills,” Fuller says. “AI literacy is now a baseline expectation, as CISOs must understand how to defend against AI-driven threats and manage governance frameworks.” ... Offers of top pay and authority to CSO candidates obviously come with high expectations. Organizations are looking for CSOs with a strong blend of technical expertise, business acumen, and interpersonal strength, Fuller says. Key skills include cloud security, identity and access management (IAM), AI governance, and incident response planning. Beyond technical skills, “power skills” such as communication, creativity, and problem-solving are increasingly valued, Fuller explains. “The ability to translate complex risks into business language and influence board-level decisions is a major differentiator. Traits such as resilience, adaptability, and ethical leadership are essential — not only for managing crises but also for building trust and fostering a culture of security across the enterprise,” he says.


From legacy to SaaS: Why complexity is the enemy of enterprise security

By modernizing, i.e., moving applications to a more SaaS-like consumption model, the network perimeter and associated on-prem complexity tends to dissipate, which is actually a good thing, as it makes ZTNA easier to implement. As the main entry point into an organization’s IT system becomes the web application URL (and browser), this reduces attackers’ opportunities and forces them to focus on the identity layer, subverting authentication, phishing, etc. Of course, a higher degree of trust has to be placed (and tolerated) in SaaS providers, but at least we now have clear guidance on what to look for when transitioning to SaaS and cloud: identity protection, MFA, and phishing-resistant authentication mechanisms become critical—and these are often enforced by default or at least much easier to implement compared to traditional systems. ... The unwillingness to simplify technology stack by moving to SaaS is then combined with a reluctant and forced move to the cloud for some applications, usually dictated by business priorities or even ransomware attacks (as in the BL case above). This is a toxic mix which increases complexity and reduces the ability for a resource-constrained organization to keep security risks at bay.


Why Metadata Is the New Interface Between IT and AI

A looming risk in enterprise AI today is using the wrong data or proprietary data in AI data pipelines. This may include feeding internal drafts to a public chatbot, training models on outdated or duplicate data, or using sensitive files containing employee, customer, financial or IP data. The implications range from wasted resources to data breaches and reputational damage. A comprehensive metadata management strategy for unstructured data can mitigate these risks by acting as a gatekeeper for AI workflows. For example, if a company wants to train a model to answer customer questions in a chatbot, metadata can be used to exclude internal files, non-final versions, or documents marked as confidential. Only the vetted, tagged, and appropriate content is passed through for embedding and inference. This is a more intelligent, nuanced approach than simply dumping all available files into an AI pipeline. With rich metadata in place, organizations can filter, sort, and segment data based on business requirements, project scope, or risk level. Metadata augments vector labeling for AI inferencing. A metadata management system helps users discover which files to feed the AI tool, such as health benefits documents for an HR chatbot, while vector labeling gives deeper information as to what’s in each document.
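As a rough sketch of metadata acting as a gatekeeper, assume each file carries a simple set of tags (the tag names, filenames, and the `vet_for_pipeline` helper are all hypothetical; a real system would pull tags from a metadata catalog rather than a dict):

```python
def vet_for_pipeline(files, exclude_tags=frozenset({"internal", "draft", "confidential"})):
    """Pass only files whose metadata carries no excluded tag.

    `files` maps filename -> set of metadata tags.
    """
    return sorted(name for name, tags in files.items()
                  if not tags & exclude_tags)

corpus = {
    "benefits_2025_final.pdf": {"hr", "final", "public"},
    "benefits_2025_draft.pdf": {"hr", "draft"},        # non-final version
    "salary_bands.xlsx":       {"hr", "confidential"}, # sensitive data
    "faq_health_plan.md":      {"hr", "final"},
}
print(vet_for_pipeline(corpus))
# ['benefits_2025_final.pdf', 'faq_health_plan.md']
```

Only the vetted files would then move on to chunking and embedding; the draft and the confidential spreadsheet never enter the pipeline at all, which is cheaper and safer than trying to filter model outputs after the fact.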


Ask a Data Ethicist: What Should You Know About De-Identifying Data?

Simply put, data de-identification is removing or obscuring details from a dataset in order to preserve privacy. We can think about de-identification as existing on a continuum... Pseudonymization is the application of different techniques to obscure the information, but allows it to be accessed when another piece of information (a key) is applied. In the above example, the identity number might unlock the full details – Joe Blogs of 123 Meadow Drive, Moab UT. Pseudonymization retains the utility of the data while affording a certain level of privacy. It should be noted that while the terms anonymize or anonymization are widely used – including in regulations – some feel it is not really possible to fully anonymize data, as there is always a non-zero chance of reidentification. Yet, taking reasonable steps on the de-identification continuum is an important part of compliance with requirements that call for the protection of personal data. There are many different articles and resources that discuss a wide variety of de-identification techniques and the merits of various approaches, ranging from simple masking techniques to more sophisticated types of encryption. The objective is to strike a balance: a technique complex enough to ensure sufficient protection, yet not burdensome to implement and maintain.
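One common pseudonymization technique, keyed hashing, can be sketched as follows. The key plays the role of the "other piece of information" that permits re-linkage and must be stored separately from the data; the key value and field names below are illustrative only. Note that a keyed hash is one-way, so re-identification in practice relies on a separately held lookup table mapping tokens back to identities.

```python
import hmac
import hashlib

SECRET_KEY = b"rotate-me-and-store-separately"  # held apart from the dataset

def pseudonymize(identifier, key=SECRET_KEY):
    """Deterministically replace a direct identifier with a keyed token.

    The same input always yields the same token, so joins and counts
    across the dataset still work; without the key, linking tokens
    to new data is infeasible.
    """
    return hmac.new(key, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"name": "Joe Blogs", "city": "Moab UT", "visits": 7}
safe = {**record, "name": pseudonymize(record["name"])}
print(safe["name"] != "Joe Blogs")               # direct identifier removed
print(pseudonymize("Joe Blogs") == safe["name"]) # still linkable, given the key
```

This sits in the middle of the continuum: far stronger than simple masking, but deliberately short of full anonymization, since the keyed mapping preserves a controlled path back to the individual.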


5 ways business leaders can transform workplace culture - and it starts by listening

Antony Hausdoerfer, group CIO at auto breakdown specialist The AA, said effective leaders recognize that other people will challenge established ways of working. Hearing these opinions comes with an open management approach. "You need to ensure that you're humble in listening, but then able to make decisions, commit, and act," he said. "Effective listening is about managing with humility with commitment, and that's something we've been very focused on recently." Hausdoerfer told ZDNET how that process works in his IT organization. "I don't know the answer to everything," he said. "In fact, I don't know the answer to many things, but my team does, and by listening to them, we'll probably get the best outcome. Then we commit to act." ... Bev White, CEO at technology and talent solutions provider Nash Squared, said open ears are a key attribute for successful executives. "There are times to speak and times to listen -- good leaders recognize which is which," she said. "The more you listen, the more you will understand how people are really thinking and feeling -- and with so many great people in any business, you're also sure to pick up new information, deepen your understanding of certain issues, and gain key insights you need."


Beyond Efficiency: AI's role in reshaping work and reimagining impact

The workplace of the future is not about humans versus machines; it's about humans working alongside machines. AI's real value lies in augmentation: enabling people to do more, do better, and do what truly matters. Take recruitment, for example. Traditionally time-intensive and often vulnerable to unconscious bias, hiring is being reimagined through AI. Today, organisations can deploy AI to analyse vast talent pools, match skills to roles with precision, and screen candidates based on objective data. This not only reduces time-to-hire but also supports inclusive hiring practices by mitigating biases in decision-making. In fact, across the employee lifecycle, it personalises experiences at scale. From career development tools that recommend roles and learning paths aligned with individual aspirations, to chatbots that provide real-time HR support, AI makes the employee journey more intuitive, proactive, and empowering. ... AI is not without its challenges. As with any transformative technology, its success hinges on responsible deployment. This includes robust governance, transparency, and a commitment to fairness and inclusion. Diversity must be built into the AI lifecycle, from the data it's trained on to the algorithms that guide its decisions. 

Daily Tech Digest - March 10, 2025


Quote for the day:

“You get in life what you have the courage to ask for.” -- Nancy D. Solomon



The Reality of Platform Engineering vs. Common Misconceptions

In theory, the definition of platform engineering is straightforward. It's a practice that involves providing a company's software developers with access to preconfigured toolchains, workflows, and environments, typically through the use of what's called an Internal Developer Platform (IDP). The goal behind platform engineering is also straightforward: It's to help developers work more efficiently and with fewer risks by allowing them to spin up compliant, ready-made solutions whenever they need them, rather than having to implement everything from scratch. ... Misuses of the term platform engineering aren't all that surprising. A similar phenomenon occurred when DevOps entered the tech lexicon in the late 2000s. Instead of universal recognition of DevOps as a distinct philosophy that involves melding software development to IT operations work, some folks effectively began using DevOps as a catch-all term to refer to anything modern or buzzworthy in the realm of software engineering. The same thing seems to be happening now in platform engineering. The term is apparently being used, at least by some professionals, to refer to any work that involves using a platform of some kind within the context of software development.


Why AI needs a kill switch – just in case

How do you develop your “AI kill switch”? The answer lies in securing the entire machine-driven ecosystem that AI depends on. Machine identities – such as digital certificates, access tokens and API keys – authenticate and authorise AI functions and their abilities to interact with and access data sources. Simply put, LLMs and AI systems are built on code, and like any code, they need constant verification to prevent unauthorised access or rogue behaviour. If attackers breach these identities, AI systems can become tools for cybercriminals, capable of generating ransomware, scaling phishing campaigns and sowing general chaos. Machine identity security ensures AI remains trustworthy, even as it scales to interact with complex networks and user bases – tasks that can and will be done autonomously via AI agents. Without strong governance and oversight, companies risk losing visibility into their AI systems, leaving them vulnerable. Attackers can exploit weak security measures, using tactics like data poisoning and backdoor infiltration – threats that are evolving faster than many organisations realise. ... Machine identity security is a critical first step – it establishes trust and resilience in an AI-driven world. This becomes even more urgent as agentic AI takes on autonomous decision-making roles across industries.


Cyber resilience under DORA – are you prepared for the challenge?

Many damaging breaches have originated from within digital supply chains, through third-party vulnerabilities, or from internal weaknesses. In 2023, third-party attacks led to 29% of breaches with 75% of third-party breaches targeting the software and technology supply chain. This evolving threat landscape has forced financial institutions to rethink their approach. The future of cyber resilience isn’t about building higher walls - it’s about securing every layer, inside and out. ... One of the most pressing concerns for financial institutions under DORA is the security of their digital supply chains. High-profile cyberattacks in recent years have demonstrated that vulnerabilities often originate not from within an organization's own IT infrastructure, but through weaknesses in third-party service providers, cloud platforms, and outsourced IT partners. DORA places a strong emphasis on third-party risk management, making it clear that security responsibility extends beyond a firm’s immediate network. Ensuring supply chain resilience requires a proactive and continuous approach. FSIs must conduct regular security assessments of all external vendors, ensuring that partners adhere to the same high standards of cybersecurity and risk management. 


Ask a Data Ethicist: How Can We Ethically Assess the Influence of AI Systems on Humans?

Bezou-Vrakatseli et al. provide some guidance in this paper, which outlines the S.H.A.P.E. framework. S.H.A.P.E. stands for secrecy, harm, agency, privacy, and exogeneity. ... If you are not aware that you are being influenced, or are unaware of the way in which the influence is taking place, there might be an ethical issue. The idea of intent to influence while keeping that intent a secret speaks to ideas of deception or trickery. ... You might be wondering – what actually constitutes harm? It’s not just physical harm. There are a range of possible harms, including mental health and wellbeing, psychological safety, and representational harms. The authors note that this issue of what is harm – ethically speaking – is contestable, and that lack of consensus can make it difficult to address. ... Human agency has “intrinsic moral value” – that is to say, we value it in and of itself. Thus, anything that messes with human agency is generally seen as unethical. There can be exceptions, and we sometimes make these when the human in question might not be able to act in their own best interests. ... Influence may be unethical if there is a violation of privacy. Much has been written about why privacy is valuable and why breaches of privacy are an ethical issue. The authors cite the following – limiting surveillance of citizens, restricting access to certain information, and curtailing intrusions into places deemed private or personal.


Is It Time to Replace Your Server Room with a Data Center?

Rare is the business that starts its IT journey with a full-fledged data center. The more typical route involves creating a server room first, then upgrading to a data center over time as IT needs expand. That raises the question: When should a business replace its server room with a data center? Which performance, security, cost and other considerations should a company weigh when deciding to switch? ... For some companies, the choice between a server room and a data center is clear-cut. A server room best serves small businesses without large-scale IT needs, whereas enterprises typically need a “real” data center. For medium-sized companies, the choice is often less clear. If a business has been getting by for years with just a server room, there is often no single tell-tale sign indicating it’s time to upgrade to a data center. And there is a risk that doing so will cost a lot of money without being necessary. ... A high incidence of server outages or downtime is another good reason to consider moving to a data center. That’s especially true if the outages stem from issues inherent to the nature of the server room – such as power system failures within the entire building, which are less of a risk inside a data center with its own dedicated power source.


How to safely dispose of old tech without leaving a security risk

Printers, especially those with built-in memory or hard drives, can retain copies of documents that were printed or scanned. Routers can store personal information related to network activity, including IP addresses, usernames, and Wi-Fi passwords. Meanwhile, smart TVs, home assistants (like Alexa, Google Home), and smart thermostats may store voice recordings, usage patterns, personal preferences, and even login credentials for streaming services like Netflix and Amazon Prime. As IoT devices become more common, they increasingly store sensitive data that is at risk when the devices are discarded. ... Before disposing of a device, it’s essential to completely erase any confidential data. Deleting files or formatting the drive alone isn’t enough, as the data can still be retrieved. The best method for securely wiping data varies depending on the device. ... Windows users can use the “Reset this PC” feature with the option to remove all files and clean the drive, while macOS users can use “Erase Disk” in Disk Utility to securely wipe storage before disposal. Tools like DBAN (Darik’s Boot and Nuke) and BleachBit can also help securely erase data. DBAN is specifically designed to wipe traditional hard drives (HDDs) by completely erasing all stored data. However, it does not support solid-state drives (SSDs), as excessive overwriting can shorten their lifespan.
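The point that deletion alone leaves data recoverable can be illustrated at the file level: the wipe tools above work by overwriting storage before releasing it. The following is a minimal Python sketch of that idea (the function name and pass count are illustrative, not taken from DBAN or BleachBit), and the same SSD caveat applies: wear leveling means file-level overwrites may not reach the original flash blocks.

```python
import os

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file with random bytes before deleting it.

    Caveat: on SSDs, wear leveling can redirect writes to fresh blocks,
    so the drive's built-in secure-erase (or full-disk encryption from
    day one) is the safer option there.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # fill with random bytes
            f.flush()
            os.fsync(f.fileno())       # force the write to disk
    os.remove(path)

# Example with a throwaway file:
with open("secret.txt", "wb") as f:
    f.write(b"confidential data")
overwrite_and_delete("secret.txt")
```

Full-disk tools apply the same overwrite strategy to the raw device rather than to individual files, which is why they must be run from boot media rather than the installed operating system.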


The great software rewiring: AI isn’t just eating everything; it is everything

Right now, most large language models (LLMs) feel like a Swiss Army knife with infinite tools — exciting but overwhelming. Users don’t want to “figure out” AI. They want solutions: AI agents tailored for specific industries and workflows. Think: legal AI drafting contracts, financial AI managing investments, creative AI generating content, scientific AI accelerating research. Broad AI is interesting. Vertical AI is valuable. Right now, LLMs are too broad, too abstract, too unapproachable for most. A blank chat box is not a product, it is homework. If AI is going to replace applications, it must become invisible, integrating seamlessly into daily workflows without forcing users to think about prompts, settings or backend capabilities. The companies that succeed in this next wave will not just build better AI models, but better AI experiences. The future of computing is not about one AI that does everything. It is about many specialized AI systems that know exactly what users need and execute on that flawlessly. ... The old software model was built on scarcity. Control distribution, limit access, charge premiums. AI obliterates this. The new model is fluid, frictionless, and infinitely scalable.


Cybersecurity: The “What”, the “How” and the “Who” of Change

Cybersecurity is more complex than that: Protecting the firm from cyberthreats requires the ability to reach across corporate silos, beyond IT, towards business and support functions, as well as digitalised supply chains. You can throw as much money as you like at the problem, but if you give it to a technologist CISO to resolve, they will address it as a technology matter. They will put ticks on compliance checklists. They will close down audit points. They will deal with incidents and put out fires. They will deploy countless tools (to the point where this is now becoming a major operational issue). But they will not change the culture of your organisation around business protection, and breaches will continue to happen as threats evolve. A lot has been said and written about the role of the “transformational CISO”, but I doubt there are many practitioners in the current generation of CISOs who can successfully wear that mantle. Simply because most have spent the last decade firefighting cyber incidents and have never been able to project a transformative vision over the mid to long term, let alone deliver it. They have not developed the political finesse, the personal gravitas, the leadership, in a word, that they would require to be trusted and succeed at delivering a truly transformative agenda across the complex and political silos of the modern enterprise.


CISOs and CIOs forge vital partnerships for business success

“One of the characteristics of a business-aligned CISO is they don’t use the veto card in every instance,” Ijam explains. “When the CISO is at the table and understands the importance of outcomes and deliverables from a business perspective as well as risk management from a security perspective, they are able to pick their battles in a smart way.” Forging a peer CIO/CISO partnership also requires the right set of leaders. While CIOs have been honing a business orientation for years, CISOs need to follow suit, maturing into a role that understands business strategy and is well-versed in the language so they command a seat at the table. “The right CISO leader is someone that doesn’t speak in ones and zeros,” Whiteside says. “They need to be at the table talking in terms that business leaders understand — not about firewalls and malware.” Becoming a C-suite peer also means cultivating an independent voice — important because CIOs and CISOs often have varying points of view, separate priorities, and different tolerances for risk. It’s equally important to make sure the CISO’s voice — and security recommendations — are part of every discussion related to business strategy, IT infrastructure, and critical systems at the beginning, not as an afterthought.


India’s Digital Personal Data Protection Act: A bold step with unfinished business

The draft Digital Personal Data Protection Rules, 2025, released on 3rd January, aim to operationalise the provisions of the Act. The Act will undoubtedly go a long way in safeguarding digital personal data. Whilst the benefits to the common citizen are laudable, there are clearly areas that need to be urgently addressed. ... The draft rules mandate data localisation, restricting the transfer of certain personal data outside India. This approach has faced criticism for potentially increasing operational costs for businesses and creating barriers to global data flows. A flexible approach could be taken with regard to data flows with friendly and trusted nations. Allowing cross-border data transfers to trusted jurisdictions with robust data protection frameworks would position India as a key player in global trade. India wants to increase exports of goods and services to achieve its vision of “Viksit Bharat” by 2047. ... Clear, technology-driven mechanisms for age verification that are not overly intrusive still need to be determined. Implementing this rule from a pragmatic perspective will be onerous. Self-declaration may turn out to be a potential way forward, given India’s massive rural population that accesses online services and platforms and the difficulty of implementing parental consent.