
Daily Tech Digest - November 21, 2025


Quote for the day:

“You live longer once you realize that any time spent being unhappy is wasted.” -- Ruth E. Renkel



DPDP Rules and the Future of Child Data Safety

Most obligations for Data Fiduciaries, including verifiable parental consent, security safeguards, breach notifications, data minimisation, and processing restrictions for children’s data, come into force after 18 months. This means that although the law recognises children’s rights today, full legal protection will not be enforceable until the end of the 18-month window. ... Parents’ awareness of data rights, online safety, and responsible technology is the backbone of their informed participation. The government needs to undertake a nationwide Digital Parenting Awareness Campaign with the help of State Education Departments, modelled on literacy and health awareness drives. ... schools often outsource digital functions to vendors without due diligence. Over the next 18 months, they must map where student data is collected and where it flows, renegotiate contracts with vendors, ensure secure data storage, and train teachers to spot data risks. Nationwide teacher-training programmes should embed digital pedagogy, data privacy, and ethical use of technology as core competencies. ... effective implementation will be contingent on the autonomy, resourcefulness, and accessibility of the Data Protection Board. The regulator should include specialised talent such as cybersecurity specialists and privacy engineers. It should be supported by building an in-house digital forensics unit, capable of investigating leaks, tracing unauthorised access, and examining algorithmic profiling.


5 best practices for small and medium businesses (SMEs) to strengthen cybersecurity

First, begin with good access control, which entails restricting employees to only the permissions they specifically require. It is also important to have multi-factor authentication in place and to regularly audit user accounts, particularly when roles shift or personnel depart. Second, keep systems and software current by immediately patching operating systems, applications, and security software to close vulnerabilities before attackers can exploit them. Updates should also be automated to avoid human error. Staff are usually the front line of defence, so the third essential practice is ongoing training of employees in identifying phishing attempts, suspicious links, and social engineering methods, making them active guardians of corporate data and effectively cutting the risk of a data breach. Fourth is safeguarding your data, which can be implemented by having regular backups stored safely in multiple places and complementing them with an explicit disaster recovery strategy, so that you are able to restore operations promptly, reduce downtime, and constrain losses in the event of a cyber attack. Fifth and finally, companies should embrace a layered security paradigm using antivirus tools, firewalls, endpoint protection, encryption, and safe networks. These layers complement each other, creating a resilient defence that protects your digital ecosystem and strengthens trust with partners, customers, and stakeholders.
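
The access-review step above can be sketched as a short script. The user records, field names, and thresholds below are hypothetical; a real audit would pull accounts from an identity provider's API rather than a hard-coded list.

```python
from datetime import date, timedelta

# Hypothetical user inventory; a real audit would query an identity provider.
users = [
    {"name": "alice", "mfa": True,  "last_login": date(2025, 11, 1)},
    {"name": "bob",   "mfa": False, "last_login": date(2025, 3, 2)},
    {"name": "carol", "mfa": True,  "last_login": date(2024, 12, 9)},
]

def audit(users, stale_after_days=90, today=date(2025, 11, 21)):
    """Flag accounts missing MFA or unused past the staleness window."""
    findings = []
    for u in users:
        if not u["mfa"]:
            findings.append((u["name"], "missing MFA"))
        if (today - u["last_login"]) > timedelta(days=stale_after_days):
            findings.append((u["name"], "stale account - review access"))
    return findings
```

Running such a check on a schedule, rather than ad hoc, is what turns the "regularly audit user accounts" advice into an automated control.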


How Artificial Intelligence is Reshaping the Software Development Life Cycle (SDLC)

With AI tools, workflows become faster and more efficient, giving engineers more time to concentrate on creative innovation and tackling complex challenges. As these models advance, they can better grasp context, learn from previous projects, and adapt to evolving needs. ... AI streamlines software design by speeding up prototyping, automating routine tasks, optimizing with predictive analytics, and strengthening security. It generates design options, translates business goals into technical requirements, and uses fitness functions to keep code aligned with architecture. This allows architects to prioritize strategic innovation and boosts development quality and efficiency. ... AI is shifting developers’ roles from manual coding to strategic "code orchestration." Critical thinking, business insight, and ethical decision-making remain vital. AI can manage routine tasks, but human validation is necessary for security, quality, and goal alignment. Developers skilled in AI tools will be highly sought after. ... AI serves to augment, not replace, the contributions of human engineers by managing extensive data processing and pattern recognition tasks. The synergy between AI's computational proficiency and human analytical judgment results in outcomes that are both more precise and actionable. Engineers are thus empowered to concentrate on interpreting AI-generated insights and implementing informed decisions, as opposed to conducting manual data analysis.
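
The fitness functions mentioned above are automated checks that code stays aligned with the intended architecture. A minimal sketch, assuming an illustrative layering rule that `web` code must not import the `db` layer directly (the layer names and rule are invented for this example):

```python
import ast

# Illustrative layering rule: (importing layer, forbidden target layer).
FORBIDDEN = {("web", "db")}

def layer_of(module_name):
    """Treat the first dotted component of a module path as its layer."""
    return module_name.split(".")[0]

def check_imports(source, module):
    """Return layering-rule violations found in one module's source code."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            for alias in node.names:
                if (layer_of(module), layer_of(alias.name)) in FORBIDDEN:
                    violations.append((module, alias.name))
        elif isinstance(node, ast.ImportFrom) and node.module:
            if (layer_of(module), layer_of(node.module)) in FORBIDDEN:
                violations.append((module, node.module))
    return violations
```

Wired into CI, a check like this fails the build whenever generated or hand-written code drifts from the architecture, which is the role fitness functions play in the workflow described above.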


Innovative Approaches To Addressing The Cybersecurity Skills Gap

In a talent-constrained world, forward-leaning organizations aren’t hiring more analysts—they’re deploying agentic AI to generate continuous, cryptographic proof that controls worked when it mattered. This defensible automation reduces breach impact, insurer friction and boardroom risk—no headcount required. ... Create an architecture and engineering review board (AERB) that all current and future technical designs are required to flow through. Make sure the AERB comprises a small group of your best engineers, developers, network engineers and security experts. The group should meet multiple times a year, and all technical staff should be required to rotate through to listen and contribute to the AERB. ... Build security into product design instead of adding it in afterward. Embed industry best practices through predefined controls and policy templates that enforce protection automatically—then partner with trusted experts who can extend that foundation with deep, domain-specific insight. Together, these strategies turn scarce talent into amplified capability. ... Rather than chasing scarce talent, companies should focus on visibility and context. Most breaches stem from unknown identities and unchecked access, not zero days. By strengthening identity governance and access intelligence, organizations can multiply the impact of small security teams, turning knowledge, not headcount, into their greatest defense.


The Configurable Bank: Low‑Code, AI, and Personalization at Scale

What does the modern banking system look like? The answer depends on where you stand. For customers, digital banking solutions need to be instant, invisible, and intuitive – a seamless tap, a scan, a click. For banks, it’s an ever-evolving race to keep pace with rising expectations. ... What was once a luxury – speed and dependability – has become the standard. Yet, behind the sleek mobile apps and fast payments, many banks are still anchored to quarterly release cycles and manual processes that slow innovation. To thrive in this landscape, banks don’t need to rip out their core systems. What they need is configurability – the ability to re-engineer services to be more agile, composable, and responsive. By making their systems configurable rather than fixed, banks can launch products faster, adapt policies in real time, and reduce the cost and complexity of change. ... The idea of the Configurable Bank is built on this shift – where technology, powered by low-code and AI, transforms banking into a living, adaptive platform. One that learns, evolves, and personalizes at scale – not by replacing the core, but by reimagining how it connects with everything around it. ... This is not just a technology shift; it’s a strategic one. With low-code, innovation is no longer the privilege of IT alone. Business teams, product leaders, and even customer-facing units can now shape and deploy digital experiences in near real time.


Deepfake crisis gets dire, prompting new investment, calls for regulation

Kevin Tian, Doppel’s CEO, says that organizations are not prepared for the flood of AI-generated deception coming at them. “Over the past few months, what’s gotten significantly better is the ability to do real-time, synchronous deepfake conversations in an intelligent manner. I can chat with my own deepfake in real-time. It’s not scripted, it’s dynamic.” Tian tells Fortune that Doppel’s mission is not to stamp out deepfakes, but “to stop social engineering attacks, and the malicious use of deepfakes, traditional impersonations, copycatting, fraud, phishing – you name it.” The firm says its R&D team has “just scratched the surface” of innovations it plans to bring to existing and upcoming products, notably in social engineering defense (SED). The Series C funds will “be used to invest in the core Doppel gang to meet the exponential surge in demand.” ... Advocating for “laws that prioritize human dignity and protect democracy,” the piece points to the EU’s AI Act and Digital Services Act as models, and specifically to new copyright legislation in Denmark, which bans the creation of deepfakes without a subject’s consent. In the authors’ words, Denmark’s law would “legally enshrine the principle that you own you.” ... “The rise of deepfake technology has shown that voluntary policies have failed; companies will not police themselves until it becomes too expensive not to do so,” says the piece.


The what, why and how of agentic AI for supply chain management

To be sure, software and automation are nothing new in the supply chain space. Businesses have long used digital tools to help track inventories, manage fleet schedules and so on as a way of boosting efficiency and scalability. Agentic AI, however, goes further than traditional SCM software tools, offering capabilities that conventional systems lack. For instance, because agents are guided by AI models, they are capable of identifying novel solutions to challenges they encounter. Traditional SCM tools can’t do this because they rely on pre-scripted options and don’t know what to do when they encounter a scenario no one envisioned beforehand. AI can also automate multiple, interdependent SCM processes, as I mentioned above. Traditional SCM tools don’t usually do this; they tend to focus on singular tasks that, although they may involve multiple steps, are challenging to automate fully because conventional tools can’t reason their way through unforeseen variables in the way AI agents do. ... Deploying agents directly into production is enormously risky because it can be challenging to predict what they’ll do. Instead, begin with a proof of concept and use it to validate agent features and reliability. Don’t let agents touch production systems until you’re deeply confident in their abilities. ... For high-stakes or particularly complex workflows, it’s often wise to keep a human in the loop.
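
The human-in-the-loop recommendation above can be sketched as a simple approval gate in front of agent actions. The action names, risk tiers, and return shape here are invented for illustration, not any particular agent framework's API:

```python
# Hypothetical set of actions considered too risky to run unattended.
HIGH_RISK = {"reroute_shipment", "cancel_purchase_order"}

def execute(action, params, approve=input):
    """Run low-risk actions automatically; escalate high-risk ones to a human."""
    if action in HIGH_RISK:
        answer = approve(f"Agent requests '{action}' with {params}. Approve? [y/N] ")
        if answer.strip().lower() != "y":
            return {"status": "rejected", "action": action}
    # In production this would call the SCM system; here we only record the call.
    return {"status": "executed", "action": action, "params": params}
```

In a proof of concept, the `approve` callback can be stubbed to log requests instead of prompting, which keeps agents away from production systems while their behavior is validated.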


How AI can magnify your tech debt - and 4 ways to avoid that trap

The survey, conducted in September, involved 123 executives and managers from large companies. There are high hopes that AI will help cut into and clear up technical debt, along with reducing costs. At least 80% expect productivity gains, and 55% anticipate AI will help reduce technical debt. However, the large segment expecting AI to increase technical debt reflects "real anxiety about security, legacy integration, and black-box behavior as AI scales across the stack," the researchers indicated. Top concerns include security vulnerabilities (59%), legacy integration complexity (50%), and loss of visibility (42%). ... "Technical debt exists at many different levels of the technology stack," Gary Hoberman, CEO of Unqork, told ZDNET. "You can have the best 10X engineer or the best AI model writing the most beautiful, efficient code ever seen, but that code could still be running on runtimes that are themselves filled with technical debt and security issues. Or they may also be relying on open-source libraries that are no longer supported." ... AI presents a new raft of problems to the tech debt challenge. The rising use of AI-assisted code risks "unintended consequences, such as runaway maintenance costs and increasing tech debt," Hoberman continued. IT is already overwhelmed with current system maintenance.


The State and Current Viability of Real-Time Analytics

Data managers now prefer real-time analytical capabilities built within their applications and systems, rather than a separate, standalone, or bolted-on project. Interest in real-time analytics as a standalone effort has dropped from 50% to 32% during the past 2 years, a recent survey of 259 data managers conducted by Unisphere Research finds. ... So, the question becomes: Are real-time analytics ubiquitous to the point in which they are automatically integrated into any and all applications? By now, the use of real-time analytics should be a “standard operating requirement” for customer experience, said Srini Srinivasan, founder and CTO at Aerospike. This is where the rubber meets the road—where “the majority of the advances in real-time applications have been made in consumer-oriented enterprises,” he added. Along these lines, the most prominent use cases for real-time analytics include “risk analysis, fraud detection, recommendation engines, user-based dynamic pricing, dynamic billing and charging, and customer 360,” Srinivasan continued. “For over a decade, these systems have been using AI and machine learning [ML], inferencing for improving the quality of real-time decisions to improve customer experience at scale. The goal is to ensure that the first customer and the hundred-millionth customer have the same vitality of customer experience.” ... “Within industries such as energy, life sciences, and chemicals, the next decade of real-time analytics will be driven by more autonomous operations,” said David Streit.


You Down with EDD? Making Sense of LLMs Through Evaluations

We're facing a major infrastructure maturity gap in AI development — the same gap the software world faced decades ago when applications grew too complex for informal testing and crossed fingers. Shipping fast with user feedback works early on, but when done at scale with rising stakes, "vibes" break down and developers demand structure, predictability, and confidence in their deployments. ... AI engineering teams are turning to an emerging solution: evaluation-driven development (EDD), the probabilistic cousin to TDD. An evaluation looks similar to a traditional software test. You have an assertion, a response, and pass-fail criteria, but instead of asking "Does this function return 42?" you're asking "Does this legal AI application correctly flag the three highest-risk clauses in this nightmare of a merger agreement?" Our trust in AI systems comes from our trust in the evaluations themselves, and if you never see an evaluation fail, you're not testing the right behaviors. The practice of Evaluation-Driven Development (EDD) is about repeatedly testing these evaluations. ... The technology for EDD is ready. Modern AI platforms provide solid evaluation frameworks that integrate with existing development workflows, but the challenge facing wide adoption is cultural. Teams need to embrace the discipline of writing evaluations before changing systems, just like they learned to write tests before shipping code. It requires a mindset shift from "move fast and break things," to "move deliberately and measure everything."
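
A minimal sketch of an evaluation harness in the EDD style described above, with a stubbed model and illustrative pass/fail checks. The function names and case format are assumptions, not a specific platform's API; a real check might use an LLM judge or a richer scoring function instead of string matching.

```python
def run_eval(model, cases, threshold=1.0):
    """Run all evaluation cases; return (overall pass, per-case results)."""
    results = []
    for case in cases:
        response = model(case["prompt"])
        results.append({"name": case["name"], "passed": case["check"](response)})
    pass_rate = sum(r["passed"] for r in results) / len(results)
    return pass_rate >= threshold, results

# Stubbed model standing in for a real LLM call.
def stub_model(prompt):
    return "Clause 4 (indemnity) is high risk."

# Each case pairs a prompt with pass/fail criteria on the response.
cases = [
    {"name": "flags_indemnity",
     "prompt": "Flag the riskiest clauses in the agreement.",
     "check": lambda r: "indemnity" in r.lower()},
    {"name": "mentions_clause",
     "prompt": "Flag the riskiest clauses in the agreement.",
     "check": lambda r: "clause" in r.lower()},
]
```

Running the suite before every prompt or model change, and deliberately writing cases that can fail, is the discipline the article argues for.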

Daily Tech Digest - October 16, 2025


Quote for the day:

"Don't wait for the perfect moment take the moment and make it perfect." -- Aryn Kyle



Major network vendors team to advance Ethernet for scale-up AI networking

“AI workloads are re-shaping modern data center architectures, and networking solutions must evolve to meet the growing demands,” wrote Martin Lund, executive vice president of Cisco’s common hardware group, in a blog post about the news. “ESUN brings together AI infrastructure operators and vendors to align on open standards, incorporate best practices, and accelerate innovation in Ethernet solutions for scale-up networking.” ESUN will focus solely on open, standards-based Ethernet switching and framing for scale-up networking—excluding host-side stacks, non-Ethernet protocols, application-layer solutions, and proprietary technologies. The group will expand the development and interoperability of XPU network interfaces and Ethernet switch ASICs for scale-up networks, the OCP stated in a blog: “The initial focus will be on L2/L3 Ethernet framing and switching, enabling robust, lossless, and error-resilient single-hop and multi-hop topologies.” ... “Scale-up AI fabrics (SAIF) provide high-bandwidth, low-latency physical network interconnectivity and enhanced memory interaction between nearby AI processors,” Gartner wrote. “Current implementations of SAIF are vendor-proprietary platforms, and there are proximity limitations (typically, SAIF is confined to only a rack or row). In most scenarios, Gartner recommends using Ethernet when connecting multiple SAIF systems together. We believe the scale, performance and supportability of Ethernet is optimal.”


Moving Beyond Awareness: How Threat Hunting Builds Readiness

The best defense begins before the first alert. Proactive threat hunting identifies the conditions that allow an attack to form and addresses them early. It moves security from passive observation to a clear understanding of where exposure originates. This move from observation to proactive understanding forms the core of a modern security program: Continuous Threat Exposure Management (CTEM). Instead of a one-time project, a CTEM program provides a structured, repeatable framework to continuously model threats, validate controls, and secure the business. For organizations ready to build this capability, A Practical Guide to Getting Started With CTEM offers a clear roadmap. ... Security Awareness Month reminds us that awareness is an essential step. Yet real progress begins when awareness leads to action. Awareness is only as powerful as the systems that measure and validate it. Proactive threat hunting turns awareness into readiness by keeping attention fixed on what matters most - the weak points that form the basis for tomorrow's attacks. Awareness teaches people to see risk. Threat hunting proves whether the risk still exists. Together they form a continuous cycle that keeps security viable long after awareness campaigns end. This October, the question for every organization is not how many employees completed the training, but how confident you are that your defenses would hold today if someone tested them. Awareness builds understanding. Readiness delivers protection.


Beyond the checklist: Building adaptive GRC frameworks for agentic AI

We must move GRC governance from a periodic, human-driven activity to an adaptive, continuous and context-aware operational capability embedded directly within the agentic AI platform. The first critical step involves implementing real-time governance and telemetry. This means we stop relying solely on endpoint logs that only tell us what the agent did and instead focus on integrating monitoring into the agent’s operating environment to capture why and how. ... The RCV is a structured, cryptographic record of the factors that drove the agent’s choice. It includes not just the data inputs, but also the specific model parameters, the weighted objectives used at that moment, the counterfactuals considered and, crucially, the specific GRC constraints the agent accessed and applied during its deliberation. ... Finally, we must address the “big red button” problem inherent in human-in-the-loop override. For agentic AI, this button cannot be a simple off switch, which would halt critical operations and cause massive disruption. The override must be non-obstructive and highly contextual, as detailed in OECD Principles on AI: Accountability and human oversight. ... We are entering an era where our systems will act on our behalf with little or no human intervention. My priority — and yours — must be to ensure that the autonomy of the AI does not translate into an absence of accountability.
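
The decision record described above can be approximated as a tamper-evident, signed structure. The field names, HMAC scheme, and key handling below are assumptions for illustration; the article does not specify a format:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # in production, a managed key, never a literal

def make_decision_record(inputs, model_params, objectives, constraints):
    """Build a tamper-evident record of the factors behind an agent decision."""
    record = {
        "inputs": inputs,
        "model_params": model_params,
        "objectives": objectives,
        "constraints_applied": constraints,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(record):
    """Recompute the signature over everything except the stored signature."""
    body = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

Because the signature covers the inputs, parameters, objectives, and applied constraints together, any after-the-fact alteration of the record is detectable, which is what makes such a record usable as audit evidence.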


Beyond Productivity: AI’s Role in Creating Hyper-Personalized and Inclusive Employee Experiences

Generative AI enhances employee experiences by analyzing unstructured information, understanding natural language and interpreting intent. Agentic AI takes this further by acting as a centralized, intelligent interface – integrating data sources, maintaining contextual awareness, adapting to individual goals and autonomously executing tasks – minimizing the need for employees to navigate multiple systems or support channels. From onboarding to learning, wellness, feedback, and career progression, it provides a seamless connected experience. Furthermore, AI systems can continuously learn from an employee’s behavior, preferences, and goals to provide real-time, tailored experiences. ... As powerful as AI is, its success in employee experience hinges on how well it aligns with human-centric values. Personalization must never feel intrusive, and inclusivity efforts must be grounded in empathy, transparency, and consent. Enterprises must adopt a responsible AI approach – ensuring fairness, explainability, and ethical data use. Employees should have clarity on how AI systems work, how data is used, and how decisions are made. Moreover, they should always have the option to challenge or override AI-driven outcomes. Leadership, HR, and IT teams must work together to create governance frameworks that reinforce trust – because even the most advanced AI fails if employees don’t feel seen, respected, and safe.


5 ideas to help bridge the genAI skills gap

Instead of focusing narrowly on technical skills, UST has shifted its training toward cultivating adaptable mindsets. “We want to develop curiosity, critical thinking, and creativity — skills that aren’t easily replaced by AI,” said Prasad, stressing that traditional classroom-style learning is insufficient when the competitive environment demands experimentation and rapid application. Employees are given access to a range of AI tools such as GitHub Copilot, Google Gemini, and Cursor, and encouraged to experiment safely in R&D environments. ... Rather than pulling people out of their daily job for separate training sessions, the company embeds training directly into daily workflows at the points where people are likely to be confronted with the need for learning material. Digital adoption platforms like Whatfix provide in-system nudges and tips directly in the tools recruiters use, guiding them in real time. Recruiting system training is integrated within the application. Users don’t know they’re interacting with a digital coach that’s training them to use the system and its AI features, such as candidate sourcing, resume analysis, and client outreach, effectively. According to Busch, the payoff is measurable: “How-to” support questions have been reduced 95% since implementing workflow learning.


Digital transformation works best when co-owned — but only if you do it right

All too often, the CIO has gone in alone to the CFO, CEO, or board to argue the benefits of a digital project in order to obtain funding. A sounder approach is to confirm the need for a digital solution to a particular business problem with the CxO in charge of that business area, and to then go in together to the budget meeting so that both the technology and the business values can be effectively presented. Secondly, there is no reason the IT budget must bear the full costs of a co-owned project. ... A first step for CxOs and CIOs toward a new, unified value creation paradigm is to root out the historical roadblocks that stand in the way of executive cooperation. CxOs must fully engage in digital projects from start to finish, and CIOs must be willing to accept co-star (instead of star) billing in projects. Most CIOs are making this shift in thinking, but CxOs still lag in project participation. Second, CIOs must gain CxO hard-dollar budget commitments for digital projects. When both co-fund and advocate for digital projects in front of the board, CEO, and CFO, both have skin in the game. Third, co-assign executive leadership responsibilities for key project milestones. The CxO might be responsible for defining the business use case and what a specific digital solution must deliver, while the CIO might be responsible for developing the solution.


Australian legislators spar with platforms, each other over age assurance laws

If there’s one thing every platform can agree on when it comes to age assurance, it’s that biometric age verification measures are a good idea – but probably just not for them. The latest to suggest that maybe they aren’t subject to the law are TikTok and Snapchat. The companies have reportedly made the case to Australia’s eSafety Commissioner that there are potential legal workarounds to Australia’s incoming social media regulations, which will prohibit users under 16 from having accounts. ... “We’re doing these things, ultimately, for the good of young people in Australia. It will span television, radio, digital. There will be some on billboards near schools around the country. They’ll see it on TV. They’ll see it online. They’ll see it, ironically, on social media, because until the 10th of December, it is legal for kids to be on social media. And if that’s where they are, that’s where we need to talk to them about what this means and why we’re doing it.” ... There is, in questioning from Senator David Shoebridge of the Australian Greens, an apparent desire to assign blame to age verification providers. He argues that Australia’s privacy laws aren’t yet ready to accommodate such data collection, in that Australia’s 1988 Privacy Act doesn’t include requirements for the deletion of data. He asks about workarounds, like masks and VPNs.


5 Must-Follow Rules of Every Elite SOC: CISO’s Checklist

Even the best analysts can’t detect everything alone. When communication breaks down and teams work in silos, critical context slips away; alerts are missed, work gets repeated, and investigations slow to a crawl. That’s why collaboration has become a core part of modern SOC performance. Inside the ANY.RUN sandbox, the Teamwork feature lets analysts join the same live workspace, share results in real time, and coordinate across roles without switching tools. Team leads can assign tasks, monitor progress, and track productivity; all from a single interface that keeps the team aligned, no matter the time zone. ... Every SOC knows the feeling; too many alerts, too many clicks, not enough time. Analysts lose hours on repetitive actions: opening files, running scripts, clicking through pop-ups, or solving CAPTCHAs just to trigger hidden payloads. With Automated Interactivity inside the ANY.RUN sandbox, all those steps happen automatically. The system opens malicious links hidden behind QR codes, interacts with fake installers, solves CAPTCHAs, and performs other routine actions; no human input needed. The sandbox handles these interactions on its own, exposing every stage of the attack chain in a fraction of the time. ... Even the best detection tools miss things. False negatives happen all the time; a file marked “safe” can still hide malicious behavior deep in its code or trigger only under specific conditions.


Identifying risky candidates: Practical steps for security leaders

Today’s fraudsters and malicious insiders often leave digital breadcrumbs outside a traditional organization’s direct visibility. Hiring teams cannot connect those breadcrumbs on their own, and they should partner with the security team to surface hidden affiliations, past fraudulent activities, or concerning behavioral patterns as a part of the overall candidate assessment. ... Outside-the-firewall checks are especially important in a remote or hybrid work environment where face-to-face verification is limited. The practical takeaway is that companies need to broaden their visibility: the more you combine traditional HR processes with external digital risk signals and collaborate across internal teams, the harder it becomes for a fraudulent candidate to work within your company undetected. ... Employees under stress or facing job insecurity may become more prone to misconduct, either through negligence or malice. Those with declining performance reviews, who are facing disciplinary action, or who have resisted security upgrades are worth closer scrutiny. Employees who give notice of resignation should be keenly watched for unauthorized activity. ... The definition of insider threat is shifting. Where once the focus was on accidental misconfigurations or negligence, today it increasingly includes malicious acts, fraud, and hybrid cases where dissatisfaction or personal pressures drive risky behavior.


CISO Conversations: Are Microsoft’s Deputy CISOs a Signpost to the Future?

Microsoft may be unique in its size and complexity. But the difficulties faced by its CISO, Igor Tsyganskiy, are the same as those faced by all CISOs – just writ much larger. The expansion of the CISO role from governance (security), to include compliance (legal), internal app and external product development (engineering), integration with business leaders (business knowledge and communication skills), artificial intelligence (data scientist) and more, implies the solution adopted by Tsyganskiy should be considered by all CISOs. ... It is encouraging that both top Microsoft dCISOs believe that such career success can be achieved by anyone with the right attitude. “Personally, I like to understand technology to a deep level. But it isn’t absolutely essential,” explains Russinovich. “You can delegate things, just like Igor is delegating his need for deep understanding of everything to a pool of dCISOs. Some level of technical understanding will always be crucial, because otherwise you’re just completely disconnected. But I think you can be an effective CISO without being as technically deep as I personally like to be.” Johnson agrees that you can have a successful career in cyber without prior cyber qualifications. “You need to have the aptitude. You need to be willing to learn every day. You need to be willing to accept what you don’t know, and you need to network,” she says.

Daily Tech Digest - September 14, 2025


Quote for the day:

"Courage doesn't mean you don't get afraid. Courage means you don't let fear stop you." -- Bethany Hamilton


The first three things you’ll want during a cyberattack

The first wave of panic in a cyberattack comes from uncertainty. Is it ransomware? A phishing campaign? Insider misuse? Which systems are compromised? Which are still safe? Without clarity, you’re guessing. And in cybersecurity, guesswork can waste precious time or make the situation worse. ... Clarity transforms chaos into a manageable situation. With the right insights, you can quickly decide: What do we isolate? What do we preserve? What do we shut down right now? The MSPs and IT teams that weather attacks best are the ones who can answer those questions without delays. ... Think of it like firefighting: Clarity tells you where the flames are, but control enables you to prevent the blaze from consuming the entire building. This is also where effective incident response plans matter. It’s not enough to have the tools; you need predefined roles, playbooks and escalation paths so your team knows exactly how to assert control under pressure. Another essential in this scenario is having a technology stack with integrated solutions that are easy to manage. ... Even with visibility and containment, cyberattacks can leave damage behind. They can encrypt data and knock systems offline. Panicked clients demand answers. At this stage, what you’ll want most is a lifeline you can trust to bring everything back and get the organization up and running again.
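
The predefined playbooks and escalation paths mentioned above can be sketched as a simple lookup that answers "what do we isolate, preserve, and escalate" for a given incident type. Incident types, steps, and contact names are illustrative:

```python
# Illustrative incident playbooks: containment steps plus an escalation contact.
PLAYBOOKS = {
    "ransomware": {
        "isolate": ["infected endpoints", "file shares"],
        "preserve": ["memory images", "encrypted samples"],
        "escalate_to": "incident-commander",
    },
    "phishing": {
        "isolate": ["compromised mailboxes"],
        "preserve": ["original emails with headers"],
        "escalate_to": "soc-lead",
    },
}

def next_actions(incident_type):
    """Return the containment checklist for a known incident type."""
    play = PLAYBOOKS.get(incident_type)
    if play is None:
        # Unknown scenario: fall back to escalation rather than improvising.
        return {"escalate_to": "incident-commander", "note": "no playbook; escalate"}
    return play
```

Keeping this mapping written down and rehearsed in advance is what lets a team answer those isolate/preserve/shut-down questions without delay under pressure.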


Emotional Blueprinting: 6 Leadership Habits To See What Others Miss

Most organizations use tools like process mapping, journey mapping, and service blueprinting. All valuable. But often, these efforts center on what needs to happen operationally—steps, sequences, handoffs. Even journey maps that include emotional states tend to track generalized sentiment (“frustrated,” “confused”) at key stages. What’s often missing is an observational discipline that reveals emotional nuance in real time. ... People don’t just come to get things done. They come with emotional residue—worries, power dynamics, pride, shame, hope, exhaustion. And while you may capture some of this through traditional tools, observation fills in what the tools can’t name. ... Set aside assumptions and resist the urge to explain. Just watch. Let insight come without forcing interpretation. ... Focus on micro-emotions in the moment, then pull back to observe the emotional arc of a journey. ... Observe what happens in thresholds—hallways, entries, exits, loading screens. These in-between moments often hold the strongest emotional cues. ... Track how people react, not just what they do. Does their behavior show trust, ease, confusion, or hesitance? ... Trace where momentum builds—or breaks. Energy flow is often a more reliable signal than feedback forms.


Cloud security gaps widen as skills & identity risks persist

According to the report, today's IT environment is increasingly complicated. The data shows that 82% of surveyed organisations now operate hybrid environments, and 63% make use of multiple cloud providers. As the use of cloud services continues to expand, organisations are required to achieve unified security visibility and enforce consistent security policies across fragmented platforms. However, the research found that most organisations currently lack the necessary controls to manage this complexity. This deficiency is leading to blind spots that can be exploited by attackers. ... The research identifies identity management as the central vulnerability in current cloud security practices. A majority of respondents (59%) named insecure identities and permissions as their primary cloud security concern. ... "Identity has become the cloud's weakest link, but it's being managed with inconsistent controls and dangerous permissions. This isn't just a technical oversight; it's a systemic governance failure, compounded by a persistent expertise gap that stalls progress from the server room to the boardroom. Until organisations get back to basics, achieving unified visibility and enforcing rigorous identity governance, they will continue to be outmanoeuvred by attackers," said Liat Hayun, VP of Product and Research at Tenable.


Biometrics inspire trust, policy-makers invite backlash

The digital ID ambitions of the EU and World are bold; the adoption numbers, they hope, are still to come. Romania is reducing the number of electronic identity cards it is planning to issue for free by a million and a half following a cut to the project’s budget. It risks fines that could in theory eventually stretch into hundreds of millions of euros for missing the EU’s digital ID targets. World now gives fans of IDs issued by the private sector, iris biometrics, decentralized systems and blockchain technologies an opportunity to invest in them on the NASDAQ. ... An analysis of the Online Safety Act by the ITIF cautions that any attempt to protect children from online harms invites backlash if it blocks benign content, or if it isn’t crystal clear about the lines between harmful and legal content. Content that promotes self-harm is being made illegal in the UK under the OSA, shifting the responsibility of online platforms from age assurance to content moderation. By making the move under the OSA, new UK Tech Secretary Liz Kendall risks strengthening arguments that the government is surreptitiously increasing censorship. Her predecessor Peter Kyle, having presided over the project so far, now gets to explain it to the American government as Trade Secretary. Domestically, more children than adults consider age checks effective, survey respondents tell Sumsub, but nearly half of UK consumers worry about the OSA leading to censorship.


How to make your people love change

The answer lies in a core need every person has: self-concordance. When change is aligned with a person’s aspirations, values, and purpose, they are more likely to embrace it. To make that happen, we need a mindset shift. This needs to happen at two levels. ... The first thing to consider is that we have to think of employees not as objects of change but as internal customers. Just like marketers try to study consumer behaviour and aspirations with deep granularity, we must try to understand employees in similar detail. And not just see them as professionals but as individuals. ... Second, it meets the employees where they are, instead of trying to push them towards an agenda. And third, and most importantly, it makes them not just invested in the change process but turns them into the change architects. What these architects will build may not be the same as what we want them to, but there will be some overlaps. And because we empowered them to do this, they become fellow travelers, and this creates a positive change momentum, which we can harvest to effect the changes we want as well. ... We worked with a client where there was a need to get out of excessively critical thinking—a practice that had kept them compliant and secure, but was now coming in the way of growth—and move towards a more positive culture. 


Cloud-Native Security in 2025: Why Runtime Visibility Must Take Center Stage

For years, cloud security has leaned heavily on preventative controls like code scanning, configuration checks, and compliance enforcement. While essential, these measures provide only part of the picture. They identify theoretical risks, but not whether those risks are active and exploitable in production. Runtime visibility fills that gap. By observing what workloads are actually running — and how they behave — security teams gain the highest fidelity signal for prioritizing threats. ... Modern enterprises face an avalanche of alerts across vulnerability scanners, cloud posture tools, and application security platforms. The volume isn't just overwhelming — it's unsustainable. Analysts often spend more time triaging alerts than actually fixing problems. To be effective, organizations must map vulnerabilities and misconfigurations to: the workloads that are actively running, the business applications they support, and the teams responsible for fixing them. This alignment is critical for bridging the gap between security and development. Developers often see security findings as disruptive, low-context interruptions. ... Another challenge enterprises face is accountability. Security findings are only valuable if they reach the right owner with the right context. Yet in many organizations, vulnerabilities are reported without clarity about which team should fix them.
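The mapping described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's API: the finding and inventory structures, image names, and team names are all invented.

```python
# Hypothetical sketch: enrich scanner findings with runtime context so that
# only vulnerabilities in actively running workloads are prioritized.

findings = [
    {"cve": "CVE-2024-0001", "image": "payments-api:1.4", "severity": 9.8},
    {"cve": "CVE-2024-0002", "image": "legacy-batch:0.9", "severity": 7.5},
]

# Runtime inventory: which images are actually running, and who owns them.
running = {
    "payments-api:1.4": {"app": "checkout", "team": "payments-eng"},
}

def prioritize(findings, running):
    """Keep only findings on running workloads, tagged with app and owner."""
    actionable = []
    for f in findings:
        ctx = running.get(f["image"])
        if ctx:  # workload is live, so the risk is exploitable in production
            actionable.append({**f, **ctx})
    # Highest severity first, so triage starts with the riskiest live workload.
    return sorted(actionable, key=lambda f: -f["severity"])

for f in prioritize(findings, running):
    print(f'{f["cve"]} -> {f["app"]} (owner: {f["team"]}, CVSS {f["severity"]})')
```

In practice the runtime inventory would come from an agent or sensor and the ownership data from a service catalog; the join-and-sort logic stays the same.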


Want to get the most out of agentic AI? Get a good governance strategy in place

The core challenge for CIOs overseeing agentic AI deployments will lie in ensuring that agentic decisions remain coherent with enterprise-level intent, without requiring constant human arbitration. This demands new governance models that define strategic guardrails in machine-readable logic and enforce them dynamically across distributed agents. ... Agents in the network, especially those retrained or fine-tuned locally, may fail to grasp the nuance embedded in these regulatory thresholds. Worse, their decisions might be logically correct yet legally indefensible. Enterprises risk finding themselves in court arguing the ethical judgment of an algorithm. The answer lies in hybrid intelligence: pairing agents’ speed with human interpretive oversight for edge cases, while developing agentic systems capable of learning the contours of ambiguity. ... Enterprises must build policy meshes that understand where an agent operates, which laws apply, and how consent and access should behave across borders. Without this, global companies risk creating algorithmic structures that are legal in no country at all. In regulated industries, ethical norms require human accountability. Yet agent-to-agent systems inherently reduce the role of the human operator. This may lead to catastrophic oversights, even if every agent performs within parameters.
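One way to picture "strategic guardrails in machine-readable logic" is a policy table checked before any agent action executes. A minimal sketch, with invented rule names and thresholds:

```python
# Hypothetical guardrail policy enforced ahead of each proposed agent action.
# The rules and thresholds are illustrative only.

GUARDRAILS = {
    "max_discount_pct": 15,        # agents may not exceed enterprise pricing policy
    "allowed_regions": {"EU", "US"},
    "requires_human": {"contract_termination"},  # edge cases escalate to a person
}

def check_action(action: dict, guardrails: dict = GUARDRAILS) -> str:
    """Return 'allow', 'deny', or 'escalate' for a proposed agent action."""
    if action["type"] in guardrails["requires_human"]:
        return "escalate"                      # hybrid intelligence: a human decides
    if action.get("region") not in guardrails["allowed_regions"]:
        return "deny"                          # policy mesh: wrong jurisdiction
    if action.get("discount_pct", 0) > guardrails["max_discount_pct"]:
        return "deny"
    return "allow"

print(check_action({"type": "quote", "region": "EU", "discount_pct": 10}))  # allow
print(check_action({"type": "contract_termination", "region": "EU"}))       # escalate
```

Real policy meshes (for example OPA-style engines) evaluate far richer context, but the allow/deny/escalate triage shown here mirrors the hybrid-intelligence pattern the article describes.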


The Critical Role of SBOMs (Software Bill of Materials) In Defending Medtech From Software Supply Chain Threats

One of the primary benefits of an SBOM is enhanced transparency and traceability. By maintaining an accurate and up-to-date inventory of all software components, organizations can trace the origin of each component and monitor any changes or updates. ... SBOMs play a vital role in vulnerability management. By knowing exactly what components are present in their software, organizations can quickly identify and address vulnerabilities as they are discovered. Automated tools can scan SBOMs against known vulnerability databases, alerting organizations to potential risks and enabling timely remediation. ... For medical device manufacturers, compliance with regulatory requirements is paramount. Regulatory bodies, such as the U.S. FDA (Food and Drug Administration) and the EMA (European Medicines Agency), have recognized the importance of SBOMs in ensuring the security and safety of medical devices. ... As part of this regulatory framework, the FDA emphasizes the importance of incorporating cybersecurity measures throughout the product lifecycle, from design and development to post-market surveillance. One of the critical components of this guidance is the inclusion of an SBOM in premarket submissions. The SBOM serves as a foundational element in identifying and managing cybersecurity risks. The FDA’s requirement for an SBOM is not just about listing software components; it’s about promoting a culture of transparency and accountability within the medical device industry.
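The automated SBOM-versus-database check mentioned above can be sketched simply. The SBOM fragment loosely follows CycloneDX's component structure, but the component versions and CVE mapping are invented; real scanners query databases such as OSV or the NVD:

```python
# Hypothetical sketch: check a minimal CycloneDX-style SBOM against a
# known-vulnerability map. All versions and CVE mappings are illustrative.

sbom = {
    "bomFormat": "CycloneDX",
    "components": [
        {"name": "openssl", "version": "1.1.1k"},
        {"name": "zlib", "version": "1.3.1"},
    ],
}

known_vulns = {
    ("openssl", "1.1.1k"): ["CVE-2022-0778"],
}

def scan_sbom(sbom, known_vulns):
    """Return every (component, CVE) pair found in the vulnerability map."""
    hits = []
    for comp in sbom["components"]:
        key = (comp["name"], comp["version"])
        for cve in known_vulns.get(key, []):
            hits.append((key, cve))
    return hits

for (name, version), cve in scan_sbom(sbom, known_vulns):
    print(f"{name} {version}: {cve}")
```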


Shedding light on Shadow AI: Turning Risk to Strategic Advantage

The fact that employees are adopting these tools on their own tells us something important: they are eager for greater efficiency, creativity, and autonomy. Shadow AI often emerges because enterprise tools lag what’s available in the consumer market, or because official processes can’t keep pace with employee needs. Much like the early days of shadow IT, this trend is a response to bottlenecks. People want to work smarter and faster, and AI offers a tempting shortcut. The instinct of many IT and security teams might be to clamp down, block access, issue warnings, and attempt to regain control. ... Employees using AI independently are effectively prototyping new workflows. The real question isn’t whether this should happen, but how organisations can learn from and build on these experiences. What tools are employees using? What are they trying to accomplish? What workarounds are they creating? This bottom-up intelligence can inform top-down strategies, helping IT teams better understand where existing solutions fall short and where there’s potential for innovation. Once shadow AI is recognised, IT teams can move from a reactive to a proactive stance, offering secure, compliant alternatives and frameworks that still allow for experimentation. This might include vetted AI platforms, sandbox environments, or policies that clarify appropriate use without stifling initiative.


Why Friction Should Be a Top Consideration for Your IT Team

Some friction can be good, such as access controls that may require users to take a few seconds to authenticate their identities but that help to secure sensitive data, or change management processes that enable new ways of doing business. By contrast, bad friction creates delays and stress without adding value. Users may experience bad friction in busywork that delivers little value to an organization, or in provisioning delays that slow down important projects. “You want to automate good friction wherever possible,” Waddell said. “You want to eliminate bad friction.” ... As organizations work to eliminate friction, they can explore new approaches in key areas. The use of platform engineering lessens friction in multiple ways, enabling organizations to reduce the time needed to bring new products and services to market. Further, it can help organizations take advantage of automation and standardization while also cutting operational overhead. Establishing cyber resilience is another important way to remove friction. Organizations certainly want to avoid the massive friction of a data breach, but they also want to ensure that they can minimize the impact of a breach and enable faster incident response and recovery. “AI threats will outpace our ability to detect them,” Waddell said. “As a result, resilience will matter more than prevention.”

Daily Tech Digest - June 17, 2025


Quote for the day:

"Next generation leaders are those who would rather challenge what needs to change and pay the price than remain silent and die on the inside." -- Andy Stanley



Understanding how data fabric enhances data security and governance

“The biggest challenge is fragmentation; most enterprises operate across multiple cloud environments, each with its own security model, making unified governance incredibly complex,” Dipankar Sengupta, CEO of Digital Engineering Services at Sutherland Global, told InfoWorld. ... Shadow IT is also a persistent threat and challenge. According to Sengupta, some enterprises discover nearly 40% of their data exists outside governed environments. Proactively discovering and onboarding those data sources has become non-negotiable. ... A data fabric deepens organizations’ understanding and control of their data and consumption patterns. “With this deeper understanding, organizations can easily detect sensitive data and workloads in potential violation of GDPR, CCPA, HIPAA and similar regulations,” Calvesbert commented. “With deeper control, organizations can then apply the necessary data governance and security measures in near real time to remain compliant.” ... Data security and governance inside a data fabric shouldn’t just be about controlling access to data; it should also come with some form of data validation. The cliched saying “garbage-in, garbage-out” is all too true when it comes to data. After all, what’s the point of ensuring security and governance on data that isn’t valid in the first place?
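Sensitive-data detection of the kind described above often starts with simple pattern classification over column samples. A minimal sketch, with illustrative patterns only (production classifiers use much richer signals than regexes):

```python
# Hypothetical sketch: a data-fabric cataloging step that flags columns likely
# to contain regulated personal data using simple pattern checks.

import re

PATTERNS = {
    "email": re.compile(r"[^@\s]+@[^@\s]+\.[a-z]{2,}", re.I),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify_column(values):
    """Return the set of sensitive-data labels detected in a column sample."""
    labels = set()
    for v in values:
        for label, pattern in PATTERNS.items():
            if pattern.search(str(v)):
                labels.add(label)
    return labels

sample = ["alice@example.com", "bob@example.org"]
print(classify_column(sample))  # {'email'}
```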


AI isn’t taking your job; the big threat is a growing skills gap

While AI can boost productivity by handling routine tasks, it can’t replace the strategic roles filled by skilled professionals, Vianello said. To avoid those kinds of issues, agencies — just like companies — need to invest in adaptable, mission-ready teams with continuously updated skills in cloud, cyber, and AI. The technology, he said, should augment – not replace — human teams, automating repetitive tasks while enhancing strategic work. Success in high-demand tech careers starts with in-demand certifications, real-world experience, and soft skills. Ultimately, high-performing teams are built through agile, continuous training that evolves with the tech, Vianello said. “We train teams to use AI platforms like Copilot, Claude and ChatGPT to accelerate productivity,” Vianello said. “But we don’t stop at tools; we build ‘human-in-the-loop’ systems where AI augments decision-making and humans maintain oversight. That’s how you scale trust, performance, and ethics in parallel.” High-performing teams aren’t born with AI expertise; they’re built through continuous, role-specific, forward-looking education, he said, adding that preparing a workforce for AI is not about “chasing” the next hottest skill. “It’s about building a training engine that adapts as fast as technology evolves,” he said.


Got a new password manager? Don't leave your old logins exposed in the cloud - do this next

Those built-in utilities might have been good enough for an earlier era, but they aren't good enough for our complex, multi-platform world. For most people, the correct option is to switch to a third-party password manager and shut down all those built-in password features in the browsers and mobile devices you use. Why? Third-party password managers are built to work everywhere, with a full set of features that are the same (or nearly so) across every device. After you make that switch, the passwords you saved previously are left behind in a cloud service you no longer use. If you regularly switch between browsers (Chrome on your Mac or Windows PC, Safari on your iPhone), you might even have multiple sets of saved passwords scattered across multiple clouds. It's time to clean up that mess. If you're no longer using a password manager, it's prudent to track down those outdated saved passwords and delete them from the cloud. I've studied each of the four leading browsers: Google Chrome, Apple's Safari, Microsoft Edge, and Mozilla Firefox. Here's how to find the password management settings for each one, export any saved passwords to a safe place, and then turn off the feature. As a final step, I explain how to purge saved passwords and stop syncing.


AI and technical debt: A Computer Weekly Downtime Upload podcast

Given that GenAI technology hit the mainstream with GPT-4 two years ago, Reed says: “It was like nothing ever before.” And while the word “transformational” tends to be generously overused in technology, he describes generative AI as “transformational with a capital T.” But transformations are not instant and businesses need to understand how to apply GenAI most effectively, and figure out where it does and does not work well. “Every time you hear anything with generative AI, you hear the word journey and we're no different,” he says. “We are trying to understand it. We're trying to understand its capabilities and understand our place with generative AI,” Reed adds. Early adopters are keen to understand how to use GenAI in day-to-day work, which, he says, can range from being an AI-based work assistant or a tool that changes the way people search for information to using AI as a gateway to the heavy lifting required in many organisations. He points out that bet365 is no different. “We have a sliding scale of ambition, but obviously like anything we do in an organisation of this size, it must be measured, it must be understood and we do need to be very, very clear what we're using generative AI for.” One of the very clear use cases for GenAI is in software development.


Cloud Exodus: When to Know It's Time to Repatriate Your Workloads

Because of the inherent scalability of cloud resources, the cloud makes a lot of sense when the compute, storage, and other resources your business needs fluctuate constantly in volume. But if you find that your resource consumption is virtually unchanged from month to month or year to year, you may not need the cloud. You may be able to spend less and enjoy more control by deploying on-prem infrastructure. ... Cloud costs will naturally fluctuate over time due to changes in resource consumption levels. It's normal if cost increases correlate with usage increases. What's concerning, however, is a spike in cloud costs that you can't tie to consumption changes. It's likely in that case that you're spending more either because your cloud service provider raised its prices or your cloud environment is not optimized from a cost perspective. ... You can reduce latency (meaning the delay between when a user requests data on the network and when it arrives) on cloud platforms by choosing cloud regions that are geographically proximate to your end users. But that only works if your users are concentrated in certain areas, and if cloud data centers are available close to them. If this is not the case, you are likely to run into latency issues, which could dampen the user experience you deliver. 
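The warning sign described here, cost growth that consumption changes can't explain, is easy to check mechanically. A sketch with invented monthly figures and an arbitrary 10% tolerance:

```python
# Hypothetical sketch: flag months where cloud cost growth is not explained
# by consumption growth. Figures and tolerance are illustrative only.

months = [
    # (month, cost_usd, compute_hours)
    ("Jan", 10_000, 5_000),
    ("Feb", 10_500, 5_200),
    ("Mar", 14_000, 5_300),  # cost jumps ~33% while usage grows ~2%
]

def unexplained_spikes(months, tolerance=0.10):
    """Return months where cost growth exceeds usage growth by > tolerance."""
    flagged = []
    for (_, c0, u0), (m, c1, u1) in zip(months, months[1:]):
        cost_growth = c1 / c0 - 1
        usage_growth = u1 / u0 - 1
        if cost_growth - usage_growth > tolerance:
            flagged.append(m)
    return flagged

print(unexplained_spikes(months))  # ['Mar']
```

A flagged month points at either a provider price change or an unoptimized environment, the two causes the article names.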


The future of data center networking and processing

The optical-to-electrical conversion that is performed by the optical transceiver is still needed in a CPO system, but it moves from a pluggable module located at the faceplate of the switching equipment to a small chip (or chiplet) that is co-packaged very closely to the target ICs inside the box. Data center chipset heavyweights Broadcom and Nvidia have both announced CPO-based data center networking products operating at 51.2 and 102.4 Tb/s. ... Early generation CPO systems, such as those announced by Broadcom and Nvidia for Ethernet switching, make use of high channel count fiber array units (FAUs) that are designed to precisely align the fiber cores to their corresponding waveguides inside the PICs. These FAUs are challenging to make as they require high fiber counts, mixed single-mode (SM) and polarization maintaining (PM) fibers, integration of micro-optic components depending on the fiber-to-chip coupling mechanism, highly precise tolerance alignments, CPO-optimized fibers and multiple connector assemblies. ... In addition to scale and cost benefits, extreme densities can be achieved at the edge of the PIC by bringing the waveguides very close together, down to about 30µm, a far tighter pitch than can be achieved with even the thinnest fibers. Next generation fiber-to-chip coupling will enable GPU optics – which will require unprecedented levels of density and scale.


Align AI with Data, Analytics and Governance to Drive Intelligent, Adaptive Decisions and Actions Across the Organisation

Unlocking AI’s full business potential requires building executive AI literacy. They must be educated on AI opportunities, risks and costs to make effective, future-ready decisions on AI investments that accelerate organisational outcomes. Gartner recommends D&A leaders introduce experiential upskilling programs for executives, such as developing domain-specific prototypes to make AI tangible. This will lead to greater and more appropriate investment in AI capabilities. ... Using synthetic data to train AI models is now a critical strategy for enhancing privacy and generating diverse datasets. However, complexities arise from the need to ensure synthetic data accurately represents real-world scenarios, scales effectively to meet growing data demand and integrates seamlessly with existing data pipelines and systems. “To manage these risks, organisations need effective metadata management,” said Idoine. “Metadata provides the context, lineage and governance needed to track, verify and manage synthetic data responsibly, which is essential to maintaining AI accuracy and meeting compliance standards.” ... Building GenAI models in-house offers flexibility, control and long-term value that many packaged tools cannot match. As internal capabilities grow, Gartner recommends organisations adopt a clear framework for build versus buy decisions. 


Do Microservices' Benefits Supersede Their Caveats? A Conversation With Sam Newman

A microservice is one of those where it is independently deployable so I can make a change to it and I can roll out new versions of it without having to change any other part of my system. So things like avoiding shared databases are really about achieving that independent deployability. And it's a really simple idea that can be quite easy to implement if you know about it from the beginning. It can be difficult to implement if you're already in a tangled mess. And that idea of independent deployability has interesting benefits because the fact that something is independently deployable is obviously useful because it's low impact releases, but there's loads of other benefits that start to flow from that. ... The vast majority of people who tell me they've scaling issues often don't have them. They could solve their scaling issues with a monolith, no problem at all, and it would be a more straightforward solution. They're typically organizational scale issues. And so, for me, what the world needs from our IT's product-focused, outcome-oriented, and more autonomous teams. That's what we need, and microservices are an enabler for that. Having things like team topologies, which of course, although the DevOps topology stuff was happening around the time of my first edition of my book, that being kind of moved into the team topology space by Matthew and Manuel around the second edition again sort of helps kind of crystallize a lot of those concepts as well.


Why Businesses Must Upgrade to an AI-First Connected GRC System

Adopting a connected GRC solution enables organizations to move beyond siloed operations by bringing risk and compliance functions onto a single, integrated platform. It also creates a unified view of risks and controls across departments, bringing better workflows and encouraging collaboration. With centralized data and shared visibility, managing complex, interconnected risks becomes far more efficient and proactive. In fact, this shift toward integration reflects a broader trend that is seen in the India Regulatory Technology Business Report 2024–2029 findings, which highlight the growing adoption of compliance automation, AI, and machine learning in the Indian market. The report points to a future where GRC is driven by data, merging operations, technology, and control into a single, intelligent framework. ... An AI-first, connected GRC solution takes the heavy lifting out of compliance. Instead of juggling disconnected systems and endless updates, it brings everything together, from tracking regulations to automating actions to keeping teams aligned. For compliance teams, that means less manual work and more time to focus on what matters. ... A smart, integrated GRC solution brings everything into one place. It helps organizations run more smoothly by reducing errors and simplifying teamwork. It also means less time spent on admin and better use of people and resources where they are really needed.


The Importance of Information Sharing to Achieve Cybersecurity Resilience

Information sharing among different sectors predominantly revolves around threats related to phishing, vulnerabilities, ransomware, and data breaches. Each sector tailors its approach to cybersecurity information sharing based on regulatory and technological needs, carefully considering strategies that address specific risks and identify resolution requirements. However, for the mobile industry, information sharing relating to cyberattacks on the networks themselves and misuse of interconnection signalling is also the focus of significant sharing efforts. Industries learn from each other by adopting sector-specific frameworks and leveraging real-time data to enhance their cybersecurity posture. This includes real-time sharing of indicators of compromise (IoCs) and the tactics, techniques, and procedures (TTPs) associated with phishing campaigns. An example of this is the recently launched Stop Scams UK initiative, bringing together tech, telecoms and finance industry leaders, who are going to share real-time data on fraud indicators to enhance consumer protection and foster economic security. This is an important development, as without cross-industry information sharing, determining whether a cybersecurity attack campaign is sector-specific or indiscriminate becomes difficult.
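Cross-sector sharing only works if indicators from different feeds can be matched up. A hypothetical sketch of normalizing and merging IoC feeds (real exchanges typically use standards such as STIX/TAXII; all field names and data here are invented):

```python
# Hypothetical sketch: normalize and deduplicate indicators of compromise
# (IoCs) received from multiple sector feeds before sharing them on.

def normalize(ioc):
    """Lowercase and strip an indicator so duplicates from different feeds match."""
    return {"type": ioc["type"].lower(), "value": ioc["value"].strip().lower(),
            "source": ioc["source"]}

def merge_feeds(*feeds):
    """Combine feeds, keeping one record per (type, value) but all sources."""
    merged = {}
    for feed in feeds:
        for raw in feed:
            ioc = normalize(raw)
            key = (ioc["type"], ioc["value"])
            merged.setdefault(key, set()).add(ioc["source"])
    return merged

telecom = [{"type": "domain", "value": "Phish.example.com ", "source": "telecom-isac"}]
finance = [{"type": "domain", "value": "phish.example.com", "source": "fin-isac"}]

for (kind, value), sources in merge_feeds(telecom, finance).items():
    print(kind, value, sorted(sources))
```

When the same indicator arrives from multiple sectors, that is evidence the campaign is indiscriminate rather than sector-specific, exactly the determination the article says is difficult without cross-industry sharing.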

Daily Tech Digest - June 02, 2025


Quote for the day:

"The best way to predict the future is to create it." -- Peter Drucker


Doing nothing is still doing something

Here's the uncomfortable truth: doing nothing is still doing something – and very often, it's the wrong thing. We saw this play out at the start of the year when Donald Trump's likely return to the White House and the prospect of fresh tariffs sent ripples through global markets. Investors froze, and while the tariffs have been shelved (for now), the real damage had already been done – not to portfolios, but to behaviour. This is decision paralysis in action. And in my experience, it's most acute among entrepreneurs and high-net-worth individuals post-exit, many of whom are navigating wealth independently for the first time. It's human nature to crave certainty, especially when it comes to money, but if you're waiting for a time when everything is calm, clear, and safe before investing or making a financial decision, I've got bad news – that day is never going to arrive. Markets move, the political climate is noisy, the global economy is always in flux. If you're frozen by fear, your money isn't standing still – it's slipping backwards. ... Entrepreneurs are used to taking calculated risks, but when it comes to managing post-exit wealth or personal finances, many find themselves out of their depth. A little knowledge can be a dangerous thing – and half-understanding the tax system, the economy, or the markets can lead to costly mistakes.


The Future of Agile Isn’t ‘agile’

One reason is that agilists introduced too many conflicting and divergent approaches that fragmented the market. “Agile” meant so many things to different people that hiring managers could never predict what they were getting when a candidate’s resume indicated s/he was “experienced in agile development.” Another reason organizations failed to generate value with “agile” was that too many agile approaches focused on changing practices or culture while ignoring the larger delivery system in which the practices operate, reinforcing a culture that is resistant to change. This shouldn’t be a surprise to people following our industry, as my colleague and LeadingAgile CEO Mike Cottmeyer has been talking about why agile fails for over a decade, such as his Agile 2014 presentation, Why is Agile Failing in Large Enterprises… …and what you can do about it. The final reason that led “agile” to its current state of disfavor is that early in the agile movement there was too much money to be made in training and certifications. The industry’s focus on certifications had the effect over time of misaligning the goals of the methodology / training companies and their customers. “Train everyone. Launch trains” may be a short-term success pattern for a methodology purveyor, but it is ultimately unsustainable because the training and practices are too disconnected from tangible results senior executives need to compete and win in the market.


CIOs get serious about closing the skills gap — mainly from within

Staffing and talent issues are affecting CIOs’ ability to double down on strategic and innovation objectives, according to 54% of this year’s respondents. As a result, closing the skills gap has become a huge priority. “What’s driving it in some CIOs’ minds is tied back to their AI deployments,” says Mark Moccia, a vice president research director at Forrester. “They’re under a lot of cost pressure … to get the most out of AI deployments” to increase operational efficiencies and lower costs, he says. “It’s driving more of a need to close the skills gap and find people who have deployed AI successfully.” AI, generative AI, and cybersecurity top the list of skills gaps preventing organizations from achieving objectives, according to an April Gartner report. Nine out of 10 organizations have adopted or plan to adopt skills-based talent growth to address those challenges. ... The best approach, Karnati says, is developing talent from within. “We’re equipping our existing teams with the space, tools, and support needed to explore genAI through practical application, including rapid prototyping, internal hackathons, and proof-of-concept sprints,” Karnati says. “These aren’t just technical exercises — they’re structured opportunities for cross-functional learning, where engineers, product leads, and domain experts collaborate to test real use cases.”


The Critical Quantum Timeline: Where Are We Now And Where Are We Heading?

Technically, the term is fault-tolerant quantum computing. The qubits that quantum computers use to process data have to be kept in a delicate state – sometimes frozen to temperatures very close to absolute zero – in order to stay stable and not “decohere”. Keeping them in this state for longer periods of time requires large amounts of energy but is necessary for more complex calculations. Recent research by Google, among others, is pointing the way towards developing more robust and resilient quantum methods. ... One of the most exciting prospects ahead of us involves applying quantum computing to AI. Firstly, many AI algorithms involve solving the types of problems that quantum computers excel at, such as optimization problems. Secondly, with its ability to more accurately simulate and model the physical world, quantum computing will generate huge amounts of synthetic data. ... Looking beyond the next two decades, quantum computing will be changing the world in ways we can’t even imagine yet, just as the leap to transistors and microchips enabled the digital world and the internet of today. It will tackle currently impossible problems, help us create fantastic new materials with amazing properties and medicines that affect our bodies in new ways, and help us tackle huge problems like climate change and cleaning the oceans.


6 hard truths security pros must learn to live with

Every technological leap will be used against you: Information technology is a discipline built largely on rapid advances. Some of these technological leaps can help improve your ability to secure the enterprise. But every last one of them brings new challenges from a security perspective, not the least of which is how they will be used to attack your systems, networks, and data. ... No matter how good you are, your organization will be victimized: This is a hard one to swallow, but if we take the “five stages of grief” approach to cybersecurity, it’s better to reach the “acceptance” level than to remain in denial because much of what happens is simply out of your control. A global survey of 1,309 IT and security professionals found that 79% of organizations suffered a cyberattack within the past 12 months, up from 68% just a year ago, according to cybersecurity vendor Netwrix’s Hybrid Security Trends Report. ... Breach blame will fall on you, and the fallout could include personal liability: As if getting victimized by a security breach isn’t enough, new Securities and Exchange Commission (SEC) rules put CISOs in the crosshairs for potential criminal prosecution. The new rules, which went into effect in 2023, require publicly listed companies to report any material cybersecurity incident within four business days.


Are you an A(I)ction man?

Whilst AI-generated action figures individually have a small impact - a drop in the ocean, you could say - trends like this exemplify how easy it is to use AI en masse, and collectively create an ocean of demand. Seeing the number of individuals, even those with knowledge of AI’s lofty resource consumption, partaking in the creation of these avatars, makes me wonder if we need greater awareness of the collective impact of GenAI. Now, I want to take a moment to clarify this is not a criticism of those producing AI-generated content, or of anyone who has taken part in the ‘action figure’ trend. I’ve certainly had many goes with DALL-E for fun, and taken part in various trends in my time, but the volume of these recent images caught my attention. Many of the conversations I had at Connect New York a few weeks ago addressed sustainability and the need for industry collaboration, but perhaps we should also be instilling more awareness from an end-user point of view. After all, ChatGPT, according to the Washington Post, consumes 39.8 million kWh per day. I’d be fascinated to see the full picture of power and water consumption from the AI-generated action figures. Whilst it will only account for a tiny fraction of overall demand, these drops have a tendency to accumulate.


The MVP Dilemma: Scale Now or Scale Later?

Teams often have few concrete requirements about scalability. The business may not be a reliable source of information but, as we noted above, they do have a business case that has implicit scalability needs. It’s easy for teams to focus on functional needs, early on, and ignore these implicit scaling requirements. They may hope that scaling won’t be a problem or that they can solve the problem by throwing more computing resources at it. They have a legitimate concern about overbuilding and increasing costs, but hoping that scaling problems won't happen is not a good scaling strategy. Teams need to consider scaling from the start. ... The MVP often has implicit scalability requirements, such as "in order for this idea to be successful we need to recruit ten thousand new customers". Asking the right questions and engaging in collaborative dialogue can often uncover these. Often these relate to success criteria for the MVP experiment. ... Some people see asynchronous communication as another scaling panacea because it allows work to proceed independently of the task that initiated the work. The theory is that the main task can do other things while work is happening in the background. So long as the initiating task does not, at some point, need the results of the asynchronous task to proceed, asynchronous processing can help a system to scale. 
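The asynchronous pattern the excerpt describes, where the initiating task carries on and only blocks if and when it actually needs the background result, can be sketched in a few lines of Python. This is a minimal illustration using `asyncio`; the task names and the simulated delay are hypothetical, not from the article.

```python
import asyncio

async def background_work(item: str) -> str:
    # Simulate a slow downstream job (e.g., generating a report)
    await asyncio.sleep(0.1)
    return f"processed:{item}"

async def main() -> list[str]:
    # Kick off the slow work without blocking the main task
    task = asyncio.create_task(background_work("order-42"))

    # The main task stays responsive and handles other work meanwhile
    results = [f"handled:{i}" for i in range(3)]

    # Only block when (and if) the background result is actually needed
    results.append(await task)
    return results

results = asyncio.run(main())
print(results)
```

The caveat from the article shows up in the final `await`: if the main task needs the background result immediately, the asynchrony buys nothing, so the pattern only helps scale when useful work can happen in between.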


Data Integrity: What It Is and Why It Matters

By contrast, data quality builds on methods for confirming the integrity of the data and also considers the data’s uniqueness, timeliness, accuracy, and consistency. Data is considered “high quality” when it ranks high in all these areas based on the assessment of data analysts. High-quality data is considered trustworthy and reliable for its intended applications based on the organization’s data validation rules. The benefits of data integrity and data quality are distinct, despite some overlap. Data integrity allows a business to recover quickly and completely in the event of a system failure, prevent unauthorized access to or modification of the data, and support the company’s compliance efforts. By confirming the quality of their data, businesses improve the efficiency of their data operations, increase the value of their data, and enhance collaboration and decision-making. Data Quality efforts also help companies reduce their costs, enhance employee productivity, and establish closer relationships with their customers. Implementing a data integrity strategy begins by identifying the sources of potential data corruption in your organization. These include human error, system malfunctions, unauthorized access, failure to validate and test, and lack of Governance. A data integrity plan operates at both the database level and business level.
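One common database-level integrity safeguard is storing a checksum alongside each record when it is written and re-verifying it on read, so silent corruption or unauthorized modification is detected. The sketch below is a simplified illustration of that idea, not a technique prescribed by the article; the record fields are hypothetical.

```python
import hashlib
import json

def record_checksum(record: dict) -> str:
    # Canonicalize the record so logically-equal records hash identically
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Store a checksum next to each record when it is written...
record = {"customer_id": 1001, "email": "pat@example.com"}
stored_checksum = record_checksum(record)

# ...and verify it on read: a match means the data is intact
assert record_checksum(record) == stored_checksum

# Any change to the record, accidental or malicious, breaks the match
tampered = {**record, "email": "attacker@example.com"}
print(record_checksum(tampered) == stored_checksum)  # → False
```

A check like this addresses integrity only; the quality dimensions the article lists (uniqueness, timeliness, accuracy, consistency) need separate validation rules.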


Backup-as-a-service explained: Your guide to cloud data protection

With BaaS, enterprises have quick, easy access to their data. Providers store multiple copies of backups in different locations so that data can be recovered when lost due to outages, failures or accidental deletion. BaaS also features geographic distribution and automatic failover, when data handling is automatically moved to a different server or system in the event of an incident to ensure that it is safe and readily available. ... With BaaS, the provider uses its own cloud infrastructure and expertise to handle the entire backup and restoration process. Enterprises simply connect to the backup engine, set their preferences and the platform handles file transfer, encryption and maintenance. Automation is the engine that drives BaaS, helping ensure that data is continuously backed up without slowing down network performance or interrupting day-to-day work. Enterprises first select the data they need backed up — whether it be simple files or complex apps — backup frequency and data retention times. ... Enterprises shouldn’t just jump right into BaaS — proper preparation is critical. Firstly, it is important to define a backup policy that identifies the organization’s critical data that must be backed up. This policy should also include backup frequency, storage location and how long copies should be retained.
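The policy decisions the excerpt lists (what to back up, how often, where copies live, how long they are kept) are essentially configuration. Below is a hypothetical sketch of such a policy with a simple sanity check; the field names and values are illustrative assumptions, as real BaaS platforms expose equivalent settings through their own consoles or APIs.

```python
# Hypothetical policy shape, not any vendor's actual schema
backup_policy = {
    "sources": ["/var/lib/db", "/home/shared/files"],  # data to back up
    "frequency_hours": 24,                             # how often backups run
    "retention_days": 90,                              # how long copies are kept
    "copies": 3,                                       # redundant copies...
    "regions": ["eu-west-1", "us-east-1", "ap-south-1"],  # ...in distinct locations
}

def validate_policy(policy: dict) -> list[str]:
    """Return a list of problems with the policy (empty means OK)."""
    problems = []
    if not policy.get("sources"):
        problems.append("no data sources selected")
    if policy.get("copies", 0) > len(policy.get("regions", [])):
        problems.append("more copies requested than regions available")
    if policy.get("retention_days", 0) < 1:
        problems.append("retention must be at least one day")
    return problems

print(validate_policy(backup_policy))  # → []
```

Checking the policy up front mirrors the article's advice that preparation, not the tooling, is the critical first step.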


CISO 3.0: Leading AI governance and security in the boardroom

AI is expanding the CISO’s required skillset beyond cybersecurity to include fluency in data science, machine learning fundamentals, and understanding how to evaluate AI models – not just technically, but from a governance and risk perspective. Understanding how AI works and how to use it responsibly is essential. Fortunately, AI has also evolved how we train our teams. For example, adaptive learning platforms that personalize content and simulate real-world scenarios are assisting in closing the skills gap more effectively. Ultimately, to become successful in the AI space, both CISOs and their teams will need to grasp how AI models are trained, the data they rely on, and the risks they may introduce. CISOs should always prioritize accountability and transparency. Red flags to look out for include a lack of explainability or insufficient auditing capabilities, both of which leave companies vulnerable. It’s important to understand how an AI tool handles sensitive data, and whether it has proven success in similar environments. Beyond that, it’s also vital to evaluate how well the tool aligns with your governance model, that it can be audited, and that it integrates well into your existing systems. Lastly, overpromising capabilities or providing an unclear roadmap for support are signs to proceed with caution.