
Daily Tech Digest - October 19, 2025


Quote for the day:

"The most powerful leadership tool you have is your own personal example." -- John Wooden


How CIOs Can Close the IT Workforce Skills Gap for an AI-First Organization

Deliberately building AI skills among existing talent, rather than searching outside the organization for new hires or leaving skills development to chance, can help develop the desired institutional knowledge and build an IT-resilient workforce. AI-first is a strategic approach that guides the use of AI technology within an enterprise, or a unit within it, with the intention of maximizing the benefits from AI. IT organizations must maintain ongoing skills development to be successful as an AI-first organization. ... In developing the future-state competency map, CIOs must include AI-specific skills and competencies, ensuring each role has measurable expectations aligned with the company’s strategic objectives related to AI. CIOs must also partner with HR to design and establish AI literacy programs. While HR leaders are experts in scaling learning initiatives and standardizing tools, CIOs have more insight into the foundational AI skills, training, and technical support required in the enterprise. CIOs should regularly review whether their teams’ AI capabilities contribute to faster product launches or improved customer insights. ... Addressing employees’ key concerns is a critical step for any AI change management initiative to be successful. AI is fundamentally changing traditional workplace operating models by democratizing access to technology, generating insights, and changing the relationship between people and technology.


20 Strategies To Strengthen Your Crisis Management Playbook

The regular review and refinement of protocols ensures alignment when a scenario arises. At our company, we centralize contacts, prepare for a range of scenarios and set outreach guidelines. This enables rapid response, timely updates and meaningful support, which safeguards trust and strengthens relationships with employees, stakeholders and clients. ... Unintended consequences often arise when stakeholder expectations are left out of crisis planning. Leaders should bake audience insights into their playbooks early—not after headlines hit. Anticipating concerns builds trust and gives you the clarity and credibility to lead through the tough moments. ... Know when to do nothing. Sometimes the instinct to respond immediately leads to increased confusion and puts your brand even further under the microscope. The best crisis managers know when to stop, see how things play out and respond accordingly (if at all), all while preparing for a variety of scenarios behind the scenes. ... Act like a board of directors. A crisis is not an event; it's a stress test of brand, enterprise and reputation infrastructure and resilience. Crisis plans must align with business continuity, incident response and disaster recovery plans. Marketing and communications must co-lead with the exec team, legal, ops and regulatory to guide action before commercial, brand equity and reputation risk escalates.


Abstract or die: Why AI enterprises can't afford rigid vector stacks

Without portability, organizations stagnate. They carry technical debt from recursive code paths, hesitate to adopt new technology and cannot move prototypes to production at pace. In effect, the database is a bottleneck rather than an accelerator. Portability, or the ability to swap underlying infrastructure without re-encoding the application, is ever more a strategic requirement for enterprises rolling out AI at scale. ... Instead of having application code directly bound to some specific vector backend, companies can compile against an abstraction layer that normalizes operations like inserts, queries and filtering. This doesn't necessarily eliminate the need to choose a backend; it makes that choice less rigid. Development teams can start with DuckDB or SQLite in the lab, then scale up to Postgres or MySQL for production and ultimately adopt a special-purpose cloud vector DB without having to re-architect the application. ... What's happening in the vector space is one example of a bigger trend of open-source abstractions as critical infrastructure: Apache Arrow in data formats, ONNX in ML models, Kubernetes in orchestration, and Any-LLM and similar frameworks in AI APIs. These projects succeed not by adding new capability, but by removing friction. They enable enterprises to move more quickly, hedge bets and evolve along with the ecosystem. Vector DB adapters continue this legacy, transforming a high-speed, fragmented space into infrastructure that enterprises can truly depend on. ...
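The adapter idea can be sketched in a few lines. The snippet below is a hypothetical illustration, not any particular project's API (the `VectorStore` interface and its method names are invented): application code depends only on the abstract interface, so an in-memory prototype backend could later be swapped for a Postgres or cloud adapter that implements the same two methods.

```python
import math
from abc import ABC, abstractmethod

class VectorStore(ABC):
    """Backend-neutral interface: application code depends only on this."""
    @abstractmethod
    def insert(self, key: str, vector: list[float]) -> None: ...
    @abstractmethod
    def query(self, vector: list[float], top_k: int = 3) -> list[str]: ...

class InMemoryStore(VectorStore):
    """Lab/prototype backend; a production adapter would implement
    the same interface against Postgres or a cloud vector DB."""
    def __init__(self):
        self._rows: dict[str, list[float]] = {}

    def insert(self, key, vector):
        self._rows[key] = vector

    def query(self, vector, top_k=3):
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            norm = (math.sqrt(sum(x * x for x in a))
                    * math.sqrt(sum(x * x for x in b)))
            return dot / norm if norm else 0.0
        # Rank stored keys by cosine similarity to the query vector
        ranked = sorted(self._rows,
                        key=lambda k: cosine(vector, self._rows[k]),
                        reverse=True)
        return ranked[:top_k]

# The caller is typed against VectorStore, so the backend can change
# without re-architecting the application.
store: VectorStore = InMemoryStore()
store.insert("docs/intro", [1.0, 0.0])
store.insert("docs/faq", [0.0, 1.0])
print(store.query([0.9, 0.1], top_k=1))  # → ['docs/intro']
```

Only the constructor line would change when moving from the lab to production, which is the "less rigid choice" the excerpt describes.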


AWS's New Security VP: A Turning Point for AI Cybersecurity Leadership?

"As we move forward into 2026, the breadth and depth of AI opportunities, products, and threats globally present a paradigm shift in cyber defense," Lohrmann said. He added that he was encouraged by AWS's recognition of the need for additional focus and attention on these cyberthreats. ... "Agentic AI attackers can now operate with a 'reflection loop' so they are effectively self-learning from failed attacks and modifying their attack approach automatically," said Simon Ratcliffe, fractional CIO at Freeman Clarke. "This means the attacks are faster and there are more of them … putting overwhelming pressure on CISOs to respond." ... "I think the CISO's role will evolve to meet the broader governance ecosystem, bringing together AI security specialists, data scientists, compliance officers, and ethics leads," she said, adding cybersecurity's mantra that AI security is everyone's business. "But it demands dedicated expertise," she said. "Going forward, I hope that organizations treat AI governance and assurance as integral parts of cybersecurity, not siloed add-ons." ... In Liebig's opinion, the future of cybersecurity leadership looks less hierarchical than it does now. "As for who owns that risk, I believe the CISO remains accountable, but new roles are emerging to operationalize AI integrity -- model risk officers, AI security architects, and governance engineers," he explained. "The CISO's role should expand horizontally, ensuring AI aligns to enterprise trust frameworks, not stand apart from them."


The Top 5 Technology Trends For 2026

In recent years, we've seen industry, governments, education and everyday folk scrambling to adapt to the disruptive impact of AI. But by 2026, we're starting to get answers to some of the big questions around its effect on jobs, business and day-to-day life. Now, the focus shifts from simply reacting to reinventing and reshaping in order to find our place in this brave, different and sometimes frightening new world. ... Rather than simply answering questions and generating content, agents take action on our behalf, and in 2026, this will become an increasingly frequent and normal occurrence in everyday life. From automating business decision-making to managing and coordinating hectic family schedules, AI agents will handle the “busy work” involved in planning and problem-solving, freeing us up to focus on the big picture or simply slowing down and enjoying life. ... Quantum computing harnesses the strange and seemingly counterintuitive behavior of particles at the sub-atomic level to accomplish many complex computing tasks millions of times faster than "classical" computers. For the last decade, there's been excitement and hype over its performance in labs and research environments, but in 2026, we are likely to see further adoption in the real world. While this trend might not appear to noticeably affect us in our day-to-day lives, the impact on business, industry and science will begin to take shape in noticeable ways.


How Successful CTOs Orchestrate Business Results at Every Stage

As companies mature, their technical needs shift from building for the present to a long-term vision, strategic partnerships, and leveraging technology to drive business goals. The Strategist CTO combines deep technical expertise with business acumen and a deep understanding of the customer journey. This leader collaborates with other executives on strategic planning, but always through the lens of where customers are heading, not strictly where technology is going. ... For large enterprises with complex ecosystems and large customer bases, stability, security, and operational efficiency are paramount. This is where the Guardian CTO safeguards the customer experience through technical excellence. This leader oversees all aspects of technical infrastructure, ensuring the reliability, security, and availability of core technology assets with a clear understanding that every decision directly impacts customer trust. ... While these operational models often align with company growth stages, they aren't rigid. A company's needs can shift rapidly due to market conditions, competitive pressures, or unexpected challenges, and customer expectations can evolve just as quickly. ... The most successful companies create environments where technical leadership evolves in response to changing business needs, empowering technical leaders to pivot their focus from building to strategizing, or from innovating to safeguarding, as circumstances demand.


Financial services seek balance of trust, inclusion through face biometrics advances

Advances in the flexibility of face biometric liveness, deepfake detection and cross-sectoral collaboration represent the latest measures against fraud in remote financial services. A digital bank in the Philippines is integrating iProov’s face biometrics and liveness detection, OneConnect and a partner are entering a sandbox to work on protecting against deepfakes, and an event held by Facephi in Mexico explored the challenges of financial services trying to maintain digital trust while advancing inclusion. ... The Philippine digital bank will deploy advanced liveness detection tools as part of a new risk-based authentication strategy. “Our mission is to uplift the lives of all Filipinos through a secure, trusted, and accessible digital bank for all Filipinos, and that requires deploying resilient infrastructure capable of addressing sophisticated fraud,” said Russell Hernandez, chief information security officer at UnionDigital Bank. “As we shift toward risk-based authentication, we need a flexible and future-ready solution. iProov’s internationally proven ability to deliver ease of use, speed, and high security assurance – backed by reliable vendor support – ensures we can evolve our fraud defenses while sustaining customer trust and confidence.” ... The Mexican government has launched several initiatives to standardize digital identity infrastructure, including Llave MX — a single sign-on platform for public services — and the forthcoming National Digital Identity Document, designed to harmonize verification across sectors.


Why context, not just data, will define the future of AI in finance

Raw intelligence in AI and its ability to crunch numbers and process data is only one part of the equation. What it fundamentally lacks is wisdom, which comes from context. In areas like personal finance, building powerful models with deep domain knowledge is critical. The challenges range from misinterpretation of data to regulatory oversights that directly affect value for customers. That’s why at Intuit, we put “context at the core of AI.” This means moving beyond generic datasets to build specialised Financial Large Language Models (LLMs) trained on decades of anonymised financial expertise. It’s about understanding the interconnected journey of our customers across our ecosystem—from the freelancer managing invoices in QuickBooks to that same individual filing taxes with TurboTax, to them monitoring their financial health on Credit Karma. ... In the age of GenAI, craftsmanship in engineering is being redefined. It’s no longer just about writing every line of code or building models from scratch, but about architecting robust, extensible systems that empower others to innovate. The very soul of engineering is transcending code to become the art of architecture. The measure of excellence is no longer found in the meticulous construction of every model, but in the visionary design of systems that empower domain experts to innovate. With tools like GenStudio and GenUX abstracting complexity, the engineer’s role isn’t diminished but elevated. They evolve from builders of applications to architects of innovation ecosystems. 


The modernization mirage: CIOs must see through it to play the long game

Enterprise architecture, in too many organizations, has been reduced to frameworks: TOGAF, Zachman, FEAF. These models provide structure but rarely move capital or inspire investor trust. Boards don’t want frameworks. They want influence. That’s why I developed the Architecture Influence Flywheel — a practical model I use in board and transformation discussions. It rests on three pivots. Outcomes: Every architectural choice must tie directly to board-level priorities — growth, resilience, efficiency. ... Relationships: CIOs must serve as business-technology translators. Express progress not in technical jargon, but in investor language — return on capital, return on innovation, margin expansion and risk mitigation. ... Visible wins: Influence grows through undeniable demonstrations. A system that cuts onboarding time by 40%, an AI model that reduces fraud losses or an audit process that clears in half the time — these visible wins build momentum. ... Technologies rise and fall. Frameworks evolve. Titles shift. But one principle endures: What leaders tolerate defines their legacy. Playing the long game requires CIOs to ask uncomfortable questions: Will we tolerate AI models we cannot explain to regulators? Will we tolerate unchecked cloud sprawl without financial discipline? Will we tolerate compliance as a box-ticking exercise rather than a growth enabler?


What Is Cybersecurity Platformization?

Cybersecurity platformization is a strategic response to this complexity. It’s the move from a collection of disparate point solutions to a single, unified platform that integrates multiple security functions. Dickson describes it as the “canned integration of security tools so that they work together holistically to make the installation, maintenance and operation easier for the end customer across various tools in the security stack.” ... The most significant hidden cost of a fragmented, multitool security strategy is labor. Managing disconnected tools is a resource strain on an organization, as it requires individuals with specialized skills for each tool. This includes the labor-intensive task of managing API integrations and manually coding “shims,” or integrations to translate data between different tools, which often have separate protocols and proprietary interfaces, Dukes says. Beyond the cost of personnel, there’s the operational complexity.  ... One of the most immediate benefits of adopting a platform approach is cost reduction. This includes not only the reduction in licensing fees but also a reduction in the operational complexity and the number of specialized employees needed. ... Another key benefit is the well-worn concept of a “single pane of glass,” a single dashboard that enables IT security teams to have easier management and reporting. Instead of multiple tools with different interfaces and data formats, a unified platform streamlines everything into a single, cohesive view.

Daily Tech Digest - March 09, 2025


Quote for the day:

"Leaders are more powerful role models when they learn than when they teach." -- Rosabeth Moss Kanter


Software Development Teams Struggle as Security Debt Reaches Critical Levels

Software development teams face mounting challenges as security vulnerabilities pile up faster than they can be fixed. That's the key finding of Veracode's 15th annual State of Software Security (SoSS) report. ... According to Wysopal, several factors contribute to this prolonged remediation timeline. Growing Codebases and Complexity: As applications become larger and incorporate more third-party components, the scope for potential flaws increases, making it more time-consuming to isolate and remediate issues. Shifting Priorities: Many teams are under pressure to roll out new features rapidly. Security fixes are often deprioritized unless they are absolutely critical. Distributed Architectures: Modern microservices and container-based deployments can fragment responsibility and visibility. Coordinating fixes across multiple teams prolongs remediation. Shortage of Skilled AppSec Staff: Finding developers or security specialists with both security expertise and domain knowledge is challenging. Limited capacity can push out or delay fix timelines. ... "Many are using AI to speed up development processes and write code, which presents great risk," Wysopal said. "AI-generated code can introduce more flaws at greater velocity, unless they are thoroughly reviewed."


Want to win in the age of AI? You can either build it or build your business with it

From a business perspective, generative AI cannot operate in a technical vacuum -- AI-savvy subject matter experts are needed to adapt the technology to specific business requirements -- that's the domain expertise career track. "As AI models become more commoditized, specialized domain knowledge becomes increasingly valuable," Challapally said. "What sets true experts apart is their deep understanding of their specific industry combined with the ability to identify where and how gen AI can be effectively applied within it." Often, he warned, bots alone cannot relay such specific knowledge. ... Business leaders cite the most intense need at this time "is for professionals who bridge both worlds -- those who deeply understand business requirements while also grasping the technical fundamentals of AI," he said. Rather than pure technologists, they seek individuals who combine traditional business acumen with technical literacy. "These are the type of people who can craft product visions, understand basic coding concepts, and gather sophisticated requirements that align technology capabilities with business goals." For those on the technical side, it's important "to master the art of prompting these tools to deliver accurate results," said Challapally.


Cyber Resilience Needs an Innovative Approach: Streamlining Incident Response for The Future

Incident response has historically been a reactive process, often hampered by time-consuming manual procedures and a lack of historical and real-time visibility. When a breach is detected, security teams scramble to piece together what happened, often working with fragmented information from multiple sources. This approach is not only slow but also prone to errors, leading to extended downtime, increased costs, and sometimes, the loss of crucial data. ... The quicker an enterprise or MSSP organization can respond to an incident, the lower the risk of disruption and the less damage it incurs. An innovative approach that automates and streamlines the collection and analysis of data in near real-time during a breach allows security teams to quickly understand the scope and impact, enabling faster decision-making and minimizing downtime. ... Automation reduces the risk of human error, which is often a significant factor in traditional incident response processes – riddled with fragmented methodologies. By centralizing and correlating data from multiple sources, an automated investigation system provides a more accurate, consistent and comprehensive view of the incident, leading to better informed, more effective containment and remediation efforts.


Data Is Risky Business: Is Data Governance Failing? Or Are We Failing Data Governance?

“Data governance” has become synonymous in some areas of academic study and industry publication with the development of legislation, regulation, and standards setting out rules and common requirements for how data should be processed or put to use. It is also still considered synonymous with or a sub-category of IT Governance in much of the academic literature. And let’s not forget our friends in records and information management and their offshoot of data governance. ... While there is extensive discussion in academia and in practitioner literature about the need for people to lead on data and the importance of people performing data stewardship-type roles, there is nothing that has dug deeper to identify what we mean by “the right people.” ... In the organizations of today, however, we are dealing with business leadership and technology leadership for whom these topics simply did not exist when they were engaged in study before entering the workforce. Therefore, they operate within the mode of thinking and apply the mental models that were taught to them, or which have dominated the cultures of the organizations where they have cut their teeth and the roles they have had as they moved from entry-level to management functions to leadership roles.


How CISOs Will Navigate The Threat Landscape Differently In 2025

In 2025, resilience is the cornerstone of effective cybersecurity. The shift from a defensive mindset to a proactive approach is evident in strategies such as advanced attack surface analytics, continuous threat modeling and offensive security testing. I’ve seen many penetration testing as a service (PTaaS) providers place an emphasis on integrating continuous penetration testing with attack surface management (ASM) as an example of how organizations can stay one step ahead of adversaries. Organizations using continuous pentesting reported 30% fewer breaches in 2024 compared to those relying solely on annual assessments, showcasing the value of a proactive approach. The adoption of cybersecurity frameworks such as NIST and ISO 27001 provides a structured approach to managing risks, but these frameworks must be tailored to the unique needs of each enterprise. For example, enterprises operating in regulated industries such as healthcare, finance and critical infrastructure must prioritize compliance while addressing sector-specific vulnerabilities. CISOs are focusing on data-driven decision making to quantify risks and justify investments. By tying cybersecurity initiatives to financial outcomes, such as reduced downtime and lower breach costs, CISOs can secure buy-in from stakeholders and ensure long-term sustainability.


AI and Automation: Key Pillars for Building Cyber Resilience

AI is now moving from training to inference, helping you quickly make sense of or create a plan from the information you have. This is made possible based on improvements to how AI understands massive amounts of semi-structured data. New AI can figure out the signal from the noise, a critical step in framing the cyber resilience problem. The power of AI as a programming language combined with its ability to ingest semi-structured data opens up a new world of network operations use cases. AI becomes an intelligent helpline, using the criteria you feed it to provide guidance to troubleshoot, remediate, or resolve a network security or availability problem. You get a resolution in hours or days – not the weeks or months it would have taken to do it manually. ... AI is not the same as automation; instead, it enhances automation by significantly speeding up iteration, learning, and problem-solving processes. New AI allows you to understand the entire scope of a problem before you automate and then automate strategically. Instead of learning on the job – when you have a cyber resilience challenge, and the clock is ticking – you improve your chances of getting it right the first time. As the effectiveness of network automation increases, so too will its adoption. 


Adaptive Cybersecurity: Strategies for a Resilient Cyberspace

We are led to consider ‘systems thinking’ to address cyber risk. This approach examines how all the systems we oversee interact on a larger scale, uncovering valuable insights to quantify and mitigate cyber risk. This perspective encourages a paradigm shift and rethinking of traditional risk management practices, emphasizing the need for a more integrated and holistic approach. The evolving and sophisticated cyber risk has heightened both awareness and expectations around cybersecurity. Nowadays, businesses are being evaluated based on their preparedness, resilience and how effectively they respond to cyber risk. Moreover, it's crucial for companies to understand their disclosure obligations across market and industry levels. Consequently, regulators and investors demand that boards prioritize cybersecurity through strong governance. ... The CISO's role has evolved to include viewing cybersecurity not merely as an IT issue but as a strategic and business risk. This shift demands that CISOs possess a combination of technical expertise and strong communication skills, enabling them to bridge the gap between technology and business leaders. They should leverage predictive analytics or AI-based threat detection tools to proactively manage emerging cyber risks. 


Choosing Manual or Auto-Instrumentation for Mobile Observability

Mobile apps run on specific devices and operating systems, which means that certain operations are standard across every app instance. For example, in an iOS app built on UIKit, the didFinishLaunchingWithOptions method informs the app developer that a freshly launched app is almost ready to run. Listening for this method in any app would in turn let you observe and learn more about the completion of app launch automatically. Quick, out-of-the-box instrumentation like this is easy to use. By importing an auto-instrumentation library to your app, you can hook into the activity of your application without writing custom code. Using auto-instrumentation provides standardized signals for actions that should be recognized in a prescribed way. You could listen for app launch, as described above, but also for the loading of views, for the beginning and ends of network requests, crashes and so on. Observability would be great if imported libraries did all the work. ... However, making sense of your mobile app requires more than just monitoring the ubiquitous signals of mobile app development. For one, mobile telemetry collection and transmission can be limited by the operating system that the app user chooses, which is not designed to transmit every signal of its own. 
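The contrast between auto- and manual instrumentation can be illustrated outside iOS as well. The Python sketch below is a hypothetical toy (names like `auto_instrument` and `SPANS` are invented, not a real library's API): a generic wrapper records a span around any function, standing in for library hooks such as listening for didFinishLaunchingWithOptions, while an app-specific event still has to be emitted by hand.

```python
import functools
import time

SPANS = []  # collected telemetry; stand-in for an exporter/backend

def auto_instrument(fn):
    """Auto-instrumentation: a library-provided wrapper that records a
    span around any call, without the app author writing telemetry code."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            SPANS.append({"name": fn.__name__,
                          "duration_s": time.perf_counter() - start})
    return wrapper

@auto_instrument
def launch_app():
    # Standard lifecycle work a library can observe generically,
    # the way an SDK hooks app-launch callbacks out of the box.
    return "ready"

def checkout(cart_total):
    # Manual instrumentation: an app-specific business event no
    # generic library can know about, so the developer emits it.
    SPANS.append({"name": "checkout.started", "cart_total": cart_total})
    return cart_total > 0

launch_app()
checkout(42)
print([s["name"] for s in SPANS])  # → ['launch_app', 'checkout.started']
```

The decorator captures the "ubiquitous" signals for free; the explicit event in `checkout` is the kind of domain-specific telemetry the article argues auto-instrumentation alone cannot supply.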


Planning ahead around data migrations

Understanding the full inventory of components involved in the data migration is crucial. However, it is equally essential to have a clearly defined target and to communicate this target to all stakeholders. This includes outlining the potential implications of the migration for each stakeholder. The impact of the migration will vary significantly depending on the nature of the project. For example, a simple infrastructure refresh will have a much smaller impact than a complete overhaul of the database technology. In the case of an infrastructure refresh, the primary impact might be a brief period of downtime while the new hardware is installed and the data is transferred. Stakeholders may need to adjust their workflows to accommodate this downtime, but the overall impact on their day-to-day operations should be minimal. On the other hand, a complete change of database technology could have far-reaching implications. Stakeholders may need to learn new skills to interact with the new database, and existing applications may need to be modified or even completely rewritten to be compatible with the new technology. This could result in a significant investment of time and resources, and there may be a period of adjustment while everyone gets used to the new system.


Your AI coder is writing tomorrow’s technical debt

With AI, this problem gets exponentially worse. Let’s say a machine writes a million lines of code – it can hold all of that in its head and figure things out. But a human? Even if you wanted to address a problem, you couldn’t do so. It’s impossible to sift through all that amount of code you’ve never seen before just to find where the problem might be. In our case, what made it particularly tricky was that the AI-generated code had these very subtle logical flaws: not even syntactic issues, just small problems in the execution logic that you wouldn’t notice at a glance. The volume of technical debt increases not just because of complexity, but simply because of the sheer amount of code being shipped. It’s a natural law. Even as humans, if you ship more code, you will have more bugs and you will have more debt. If you are exponentially increasing the amount of code you’re shipping with AI, then yes, maybe you catch some issues during review, but what slips through just gets shipped. The volume itself becomes the problem. ... the solution lies in far better communication throughout the whole organisation, coupled with robust processes and tooling. ... The tooling side is equally important. We’ve customised our AI tools’ settings to align with our tech stack and standards. Things like prompt templates that enforce our coding style, pre-configured with our preferred libraries and frameworks. 

Daily Tech Digest - February 19, 2025


Quote for the day:

"Go confidently in the direction of your dreams. Live the life you have imagined." -- Henry David Thoreau


Why Observability Needs To Go Headless

Not all logs have long-term value, but that’s one of the advantages of headless observability and decoupled storage. Teams have the freedom and flexibility to determine which logs should be retained for longer periods. Web application firewall (WAF) and other security logs can be retained over the long term and made available to cybersecurity teams and threat hunters. Other application logs can provide long-term insights into how resources are being used for capacity planning and anomaly detection. Let’s take a closer look at a real, tangible use case where observability data can be valuable for other teams: real user monitoring (RUM). In the realm of observability, RUM allows teams to proactively monitor how end users are experiencing web applications. Issues like slow page loads can be mitigated before they frustrate users. Beyond observability, RUM data can also provide insights into how your end users are interacting with your brand and your products. This data is invaluable for marketing, advertising and leadership teams that need to plan strategy. ... As a real-world example, many enterprises use CDN log data for real user monitoring. In the short term, monitoring CDNs is important for ensuring good user experiences and fast loading times of digital assets. However, being able to retain huge volumes of log data long term and cost-effectively provides certain advantages to enterprises.


Why the CIO role should be split in two

The fact is that within enterprises, existing architecture is overly complex, often including new digital systems interconnected with legacy systems. This ‘hybrid’ architecture is a combination of best and bad practice. When there is an outage, the new digital platforms can invariably be restored to recover business process support. But because they do not operate in isolation, instead connecting with legacy technologies, business operations themselves may not fully recover if the legacy systems continue to be impacted by the outage. For most enterprises stuck in this hybrid state, the way forward is to be more disciplined about architecture. ... Simplifying architecture at an enterprise level is something the CIO and CISO should work on together as a shared goal. The benefits of doing so will accrue over time rather than immediately, hence there can be some reluctance to prioritize. ... What does all this have to do with my opening discussion about the CIO and complementary IT executive roles? Splitting the CIO role into smaller and smaller pieces would be okay if doing so led to better outcomes. But I would argue that examples like the ones above show that the multiple-exec approach is not a success story we should be bragging about. In this structure, the two CIOs would share ownership of the IT strategy.


Generative AI vs. the software developer

AI is not going to turn your customer support people (Elvis bless them) into senior software developers. A customer support person might be able to think “I need to track the connection between items in inventory, the customer’s shopping cart, and the discount pricing for a given item,” but unless that person also knows how to code, they will have a seriously hard time instructing an AI model to generate the code they need. Most likely, they aren’t going to know if the code the AI produces even runs, let alone works correctly. But AI can help actual developers in many ways. It can look at existing code you have written and help you produce the next thing that you need to write. It can even write large routines and classes that you ask it to. But it is not going to create the things you need without you having a large say in what that is. You need to know how to craft a prompt to get precisely what is needed. ... Now, that prompt will be pretty effective in getting what is asked for. But the trick here, obviously, is that you have to know what a React component is, what Tailwind is, the fact that you want tests, what TypeScript is, what null is, and that you’d even need to handle missing values. There is a lot of knowledge and experience wrapped up in that prompt, and it’s not something that an inexperienced developer, or certainly a non-developer, would be able to write.
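The prompt the author refers to is elided in this excerpt; a hypothetical prompt of the kind being described might look like this (purely illustrative, not the article's actual wording):

```text
Write a React functional component in TypeScript that renders a customer
profile card, styled with Tailwind utility classes. Props may arrive with
null or missing fields; render a sensible placeholder for any missing
value instead of throwing. Include unit tests covering the null cases.
```

Every requirement packed into that prompt (component, Tailwind, TypeScript, null handling, unit tests) is a piece of developer knowledge the prompt writer must already have.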


Beyond the Screen: Humanising Digital Learning

Digital learning holds a lot of promise, aiming to bring the most dynamic and engaging elements of in-person training into the digital space. Interactive tools like quizzes, breakout rooms, and mini-tasks demonstrate just how far we’ve come in replicating real-world engagement online. However, we continue to see issues with retention and follow-through. Recent research shows that 66% of employees still find on-the-job learning to be more effective than formal online courses. This disconnect often stems from a lack of deep, meaningful engagement. Without it, employees are less likely to retain knowledge or apply their skills effectively in the workplace. This is particularly crucial when it comes to human skills—broader soft skills like communication, emotional intelligence, and critical thinking. Unlike technical skills that are typically learned ‘by the book’, softer skills are learned and applied every day. The solution lies in moving beyond passive consumption to real-world, interactive learning simulations. ... The shift to digital learning offers incredible potential, but realising that potential requires a thoughtful approach. By embracing AI-powered technologies and prioritising interactive, personalised and bite-sized content, organisations can create learning experiences that are engaging, practical and transformative.


Shadow AI: How unapproved AI apps are compromising security, and what you can do about it

Shadow AI introduces significant risks, including accidental data breaches, compliance violations and reputational damage. It’s the digital steroid that allows those using it to get more detailed work done in less time, often beating deadlines. Entire departments have shadow AI apps they use to squeeze more productivity into fewer hours. “I see this every week,” Vineet Arora, CTO at WinWire, recently told VentureBeat. “Departments jump on unsanctioned AI solutions because the immediate benefits are too tempting to ignore.” ... “If you paste source code or financial data, it effectively lives inside that model,” Golan warned. Arora and Golan find companies defaulting to shadow AI apps for a wide variety of complex tasks, in effect training public models on their proprietary data. Once proprietary data gets into a public-domain model, more significant challenges begin for any organization. It’s especially challenging for publicly held organizations that often have significant compliance and regulatory requirements. Golan pointed to the coming EU AI Act, which “could dwarf even the GDPR in fines,” and warns that regulated sectors in the U.S. risk penalties if private data flows into unapproved AI tools. There’s also the risk of runtime vulnerabilities and prompt injection attacks that traditional endpoint security and data loss prevention (DLP) systems and platforms aren’t designed to detect and stop.


Think being CISO of a cybersecurity vendor is easy? Think again

When people in this industry hear that a CISO is working at a cybersecurity vendor, it can trigger a number of assumptions — many of them misguided. There’s a stereotype that the role isn’t “real” CISO work, that it’s more akin to being a field CISO, someone primarily outward-facing and focused on supporting sales or amplifying the brand. The assumption goes something like this: How hard can it be to secure a security company, and isn’t the “real” work done at companies outside of this bubble? ... Some might think that working at a security company limits your perspective of what’s out there in the broader industry, but I found the opposite to be true. I gained a deeper understanding of how organizations evaluate security solutions and what they truly care about. I saw firsthand the challenges customers faced when implementing security tools, and that experience gave me empathy, insight, and a renewed ability to speak their language. Now that I’m back in industry, I’m bringing that perspective with me. The transition wasn’t a step “down” or a shift away from anything; it was just the next phase in my career. Security leadership is security leadership, no matter where you practice it. The challenges remain complex, the responsibilities remain vast, and the importance of aligning security with business outcomes remains paramount.


Lack of regulations, oversight in health care IT can cause harm

Increasingly, health care organizations have outsourced their health IT infrastructure to companies owned and operated by private equity, venture capital and Big Tech firms that view them as platforms to experiment with unproven AI and machine-learning tools. "The unregulated integration of AI tools into these systems will make it even harder to protect patients' rights," Appelbaum said. "Moreover, because these records contain so much information and are centralized, they are among the most lucrative targets for cyberattacks and hackers," Batt said, noting that in 2024, data breaches exposed the health records of more than 200 million Americans. As a result, health care organizations must now invest billions more in cybersecurity systems owned and operated by venture capital, private equity and Big Tech. The authors argue that the federal government is once again behind in setting safeguards for the adoption of new health IT, and that the lessons from 30 years of attempts to set adequate standards for information-sharing in electronic health systems—as detailed in these reports—should spur regulators to act quickly and rein in unregulated financial activities in health IT. Batt explained, "The history of the health IT implementation and the lack of sufficient regulatory oversight and enforcement of standards should give us great pause for the current enthusiasm over the adoption of AI and machine learning in health information systems."


The Future of Data: How Decision Intelligence is Revolutionizing Data

Decision Intelligence is an interdisciplinary field that uses AI to enhance all aspects of decision-making across all areas of a business. It blends concepts of Data Science (statistics, machine learning, AI, analytics) with Behavioral Sciences (psychology, neuroscience, economics, and managerial sciences) to understand how decisions are made and how outcomes are measured. ... Decision Intelligence (DI) can be considered a subset where it uses AI to build a reliable data foundation by collecting, organizing, and connecting data and then applying AI and analytics to turn that data into useful insights for better decision-making. In short, while AI provides the technology to mimic human intelligence, DI focuses on applying that technology to improve how decisions are made. ... You can use any of your machine learning models, like regression models, classification models, time series forecasting models, clustering algorithms, or reinforcement learning for implementing Decision Intelligence. These machine learning models will help identify patterns in the data and make predictions based on those patterns, but decision intelligence will take that information one step further by incorporating it into a broader framework that can actively guide the decision-making process by considering the predictions and the potential outcomes and consequences of different choices.
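The prediction-to-decision step described above can be sketched in a few lines. Here the "model" is a stand-in churn probability, and the action names, costs, and customer value are illustrative assumptions; the decision layer scores each possible action by expected net value rather than stopping at the prediction:

```python
ACTIONS = {
    # action: (cost, retention_lift applied if the customer would churn)
    "do_nothing":    (0.0,  0.00),
    "email_offer":   (5.0,  0.15),
    "discount_call": (40.0, 0.45),
}
CUSTOMER_VALUE = 500.0  # assumed annual value of a retained customer

def best_action(churn_prob):
    """Decision layer: choose the action with the highest expected net value."""
    def expected_net(action):
        cost, lift = ACTIONS[action]
        # value saved if the action prevents an otherwise-likely churn
        return CUSTOMER_VALUE * churn_prob * lift - cost
    return max(ACTIONS, key=expected_net)

print(best_action(0.05))  # low churn risk: intervention doesn't pay
print(best_action(0.80))  # high churn risk: costly intervention pays off
```

The churn probability could come from any of the models listed above; the decision layer is what weighs predictions against the outcomes and consequences of each choice.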


ManpowerGroup exec explains how to manage an AI workforce

It’s not just a technology anymore. We are looking for individuals who have the industry experience. We can take somebody with industry experience and train them on the technical part of the job. It’s a lot harder for us to take somebody with the technical skills and teach them how the industry works. I think there’s a focus on looking at the soft skills: the problem solving, the complex reasoning ability, and communications. Because it’s not just developing AI for the sake of software technology; it’s to address that larger business problem. It’s about looking at all of the business functions, and taking all of that into consideration. ... The problem is [that] the gap is getting wider between those employees who understand AI technology and are willing to learn more about it and those who don’t want to have anything to do with it. But I think everybody will be a technologist, eventually. It’s going to be talent augmented by technology. ... There are so many things, and it’s happening so fast. So, we are still learning as fast as we can. We’re trying to understand what the impact of AI will be, and how it will change our business models. Even from a talent organization like ours, which is providing global talent solutions, what does that do for us? Now, our company is going to start looking for your talent plus the AI agents you’ll need. So AI becomes part of a hiring solution.


Debunking the AI Hype: Inside Real Hacker Tactics

While headlines are trumpeting AI as the one-size-fits-all new secret weapon for cybercriminals, the statistics—again, so far—are telling a very different story. In fact, after poring over the data, Picus Labs found no meaningful upswing in AI-based tactics in 2024. Yes, adversaries have started incorporating AI for efficiency gains, such as crafting more credible phishing emails or creating/debugging malicious code, but they haven't yet tapped AI's transformational power in the vast majority of their attacks so far. Indeed, the data from the Red Report 2025 shows that you can still thwart the majority of attacks by focusing on tried-and-true TTPs. ... Attackers are increasingly targeting password stores, browser-stored credentials, and cached logins, leveraging stolen keys to escalate privileges and spread within networks. This threefold jump underscores the urgent need for ongoing and robust credential management combined with proactive threat detection. Modern infostealer malware orchestrates multi-stage heists blending stealth, automation, and persistence. With legitimate processes cloaking malicious operations and actual day-to-day network traffic hiding nefarious data uploads, bad actors can exfiltrate data right under your security team's proverbial nose, no Hollywood-style "smash-and-grab" needed. Think of it as the digital equivalent of a perfectly choreographed burglary.

Daily Tech Digest - December 05, 2023

Post-Quantum Cryptography: The lynchpin of future cybersecurity

Since we are still at least a decade away from an ideal quantum computer, this may not seem like an imminent threat. However, this is not the case, since annealing quantum computers are already a reality. While these are not capable of utilising Shor’s algorithm, they can solve the factoring problem by formulating it as an optimization problem and have already made much progress. Furthermore, there is also the problem of “harvest now, decrypt later,” which essentially means that an attacker can steal data now, wait until quantum computers become a practical reality, and subsequently decrypt it at a later time. This implies that quantum computers already pose a very real threat, even before they fully exist. There is a distinct possibility that large amounts of data have already been compromised, and the rectification of this problem is an immediate concern, which is why the incorporation of PQC into current encryption protocols is absolutely imperative. For instance, according to IBM’s “Cost of a Data Breach Report 2023,” more than 95 percent of the organisations studied globally have experienced more than one data breach.
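A toy sketch of why factoring breaks RSA, and why "harvest now, decrypt later" works: ciphertext recorded today can be decrypted whenever factoring the modulus becomes feasible. The 16-bit modulus below is deliberately trivial (real keys are 2048 bits or more), and trial division stands in for the factoring step that Shor's algorithm would make fast:

```python
from math import isqrt

p, q = 251, 241          # toy primes; never remotely this small in practice
n, e = p * q, 65537      # public key (n = 60491)
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)      # private exponent (Python 3.8+ modular inverse)

message = 4242
ciphertext = pow(message, e, n)   # "harvested" today by an eavesdropper

def factor(n):
    """Trial division: the step Shor's algorithm would make fast at scale."""
    for f in range(2, isqrt(n) + 1):
        if n % f == 0:
            return f, n // f

fp, fq = factor(n)                           # becomes feasible "later"
recovered_d = pow(e, -1, (fp - 1) * (fq - 1))
print(pow(ciphertext, recovered_d, n))       # plaintext recovered
```

The attacker never needed the private key, only patience: once the modulus is factored, the private exponent follows immediately, which is the entire case for migrating to post-quantum schemes now.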


Payments for net zero – How the payments industry can contribute towards decarbonisation

It is crucial to involve senior leaders in comprehending the compelling reasons for both commercial and societal urgency to decarbonise. Furthermore, gaining insight into how various stakeholders (ranging from employees, investors, and regulators to civil society) are progressively aligning with the necessity for businesses and society to undergo decarbonisation will fortify the approach. This alignment creates a potent mandate and a unique opportunity for the payment network to discern and investigate its distinct role in facilitating the transition toward net zero. ... Payment networks & fintechs should allocate sufficient resources to explore alignment between their core capabilities and sectors/systems needing to decarbonise. This may involve investing in sustainability and climate change expertise within core teams such as data, product innovation, and strategy. Additionally, conducting robust research on trends and carbon impacts in various economic sub-sectors can help overlay payment networks’ capacities to pinpoint net-zero solutions. Engaging with external stakeholders can also aid in identifying and testing potential opportunity areas.


How AI-assisted code development can make your IT job more complicated

Increased use of AI will also mean personalization becomes an important skill for developers. Today's applications "need to be more intuitive and built with the individual user in mind versus a generic experience for all," says Lobo. "Generative AI is already enabling this level of personalization, and most of the coding in the future will be developed by AI." Despite the rise of generative technology, humans will still be required at key points in the development loop to assure quality and business alignment. "Traditional developers will be relied upon to curate the training data that AI models use and will examine any discrepancies or anomalies," Lobo adds. Technology managers and professionals will need to assume more expansive roles within the business side to ensure that the increased use of AI-assisted code development serves its purpose. We can expect this focus on business requirements to lead to a growth in responsibility via roles such as "ethical AI trainer, machine learning engineer, data scientist, AI strategist and consultant, and quality assurance," says Lobo.


The more the CIO can function as a centralized source for technology resources, the better, says Ping Identity’s Cannava, who sees this transpiring in three phases, depending on the maturity of the organization. In Phase 1, the CIO is the clearinghouse for current technology projects, taking on the traditional role as in-house consultant. In Phase 2, the CIO becomes the clearinghouse for data within the organization. “In many cases, we are the keepers of the keys to datasets,” he says. “We have the ability to bring datasets together, and those insights could drive what the agenda is for the business. They could show us where we have the opportunity to improve our go-to-market. So having that access to the insights driving business intelligence initiatives has allowed us to expand our seat at the table.” In Phase 3, the CIO also becomes the clearinghouse for emerging technologies. Because, he says, to truly unlock the potential of all that data, you need artificial intelligence. And that raises some immediate questions for CIOs who want to be orchestrators. 


How DoorDash Migrated from Aurora Postgres to CockroachDB

Until the monolith was broken up, it offered a single view of the toll that demand for the application was taking on the databases. But once that monolith was broken into microservices, that visibility would disappear. “Our biggest enemy was the single primary architecture of our database,” Salvatori said. “And our North Star would be to move to a solution that offered multiple writers.” In the meantime, the DoorDash team adopted a “poor man’s solution” approach to dealing with its overmatched database architecture, Salvatori told the Roachfest audience: building vertical federation of tables, while not blocking microservices extractions. In this game of “whack-a-mole,” he said, “Different tables would be able to get their own single writer and therefore scale a little bit and allow us to keep the lights on for a little bit longer. But we needed to take steps toward limitless horizontal scalability.” CockroachDB, a distributed SQL database management system, seemed like the right answer.


Taming the Virtual Threads: Embracing Concurrency With Pitfall Avoidance

When a virtual thread performs a long computation, it excessively occupies its carrier thread, preventing other virtual threads from utilizing that carrier thread. For example, a virtual thread that runs a lengthy CPU-bound task with no yield points monopolizes its carrier, and a virtual thread that blocks while pinned to its carrier (for instance, inside a synchronized block) likewise prevents other virtual threads from making progress. Inefficient resource management within the virtual thread can also lead to excessive resource utilization, causing monopolization of the carrier thread. Monopolization can have detrimental effects on the performance and scalability of virtual thread applications. It can lead to increased contention for carrier threads, reduced overall concurrency, and potential deadlocks. To mitigate these issues, developers should strive to minimize monopolization by breaking down lengthy computations into smaller, more manageable tasks to allow other virtual threads to interleave and utilize the carrier thread.
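The article's advice concerns Java's virtual threads, but the monopolization pitfall appears in any cooperative scheduler. As a language-neutral analogy (Python's asyncio, not Java code), the sketch below contrasts a task with no yield points, which runs to completion before its peer gets any time, with chunked work that yields between steps so peers interleave:

```python
import asyncio

async def monopolist(name, log, steps):
    for i in range(steps):
        log.append(f"{name}{i}")  # never yields: peers starve until it's done

async def cooperative(name, log, steps):
    for i in range(steps):
        log.append(f"{name}{i}")
        await asyncio.sleep(0)    # yield point: hand control back to the scheduler

async def run(worker):
    log = []
    await asyncio.gather(worker("A", log, 3), worker("B", log, 3))
    return log

print(asyncio.run(run(monopolist)))   # A finishes entirely before B starts
print(asyncio.run(run(cooperative)))  # A and B interleave step by step
```

Breaking the long loop into chunks with explicit yield points is the asyncio analogue of the article's recommendation to split lengthy computations into smaller tasks.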


The all-flash datacentre: Mirage or imminent reality?

The initial advantage of flash over HDDs was speed. Flash was adopted in workstations and laptops, and in enterprise servers running performance-critical and especially I/O-dependent applications. Flash’s performance edge is greatest on random reads and writes. The gap is narrower for sequential read/write operations. A well-configured HDD array with flash-based caching comes close enough to all-flash speeds in real-world environments. “It does depend what infrastructure you have and what characteristics you are looking for from your storage,” says Roy Illsley, chief analyst of IT operations at Omdia. “That includes performance on read, on writes, capacity. The most appropriate [storage] for your needs could be flash, or just as equally spinning media. All flash datacentres may be a reality where workloads require the strength of flash, but I am not expecting all-flash datacentres to become commonplace.” According to Rainer Kaise, senior manager of business development at Toshiba Electronics Europe – a hard drive manufacturer – 85% of the world’s online media is still stored on HDDs.


How cybersecurity teams should prepare for geopolitical crisis spillover

It is one thing to understand why geopolitical spillover impacts private enterprise but another to be able to assign any kind of probability of risk to them. Fortunately, research on global cyber conflict and enterprise cybersecurity provides a reasonable starting point for dealing with this uncertainty. Scholars and policy commentators are interested in linking the realities of cyber operations to situational risk profiles, particularly for non-degradation threats for which traditional security assessment processes tend to be sufficient. Performative attacks come with perhaps the most obvious set of threat indicators. Companies that are "named and shamed" during geopolitical crisis moments tend to have one of two characteristics. First, their symbolic profile is constitutionally indivisible in the context of the current conflict. This means that a firm, through its statements, actions, or products, clearly underwrites one side in the conflict. Media organizations that consistently toe a national line such as Russia's Pravda are an example of this, but so are firms with leaders or major stakeholders belonging to ethnic, religious, or linguistic backgrounds pertinent to a crisis.


Can cloud computing be truly federated?

The core idea is to save money, but it requires accepting that the physical resources could be scattered in any system willing to be part of the federated cloud. I’m not suggesting anything silly, such as taking over someone’s smartwatch as a peer node, but there is a vast quantity of underutilized hardware out there, still running and connected to a network in an enterprise data center, that could be leveraged for this model. The idea of a federated public cloud service does exist today at varying degrees of maturation, so please don’t send me an angry email telling me your product has been doing this for years and that I’m somehow a bad person for not knowing it existed. As I said, federation is an old architectural concept many have adopted. What is new is bringing it to a widely used public cloud computing platform, which we haven’t seen yet for the most part. In this approach, a centralized system coordinates the provisioning of traditional cloud services such as storage and computing between the requesting peer and a peer that could provide that service.


How AI is revolutionizing “shift left” testing in API security

SAST and DAST are well-established web application tools that can be used for API testing. However, the complex workflows associated with APIs can result in an incomplete analysis by SAST. At the same time, DAST cannot provide an accurate assessment of the vulnerability of an API without more context on how the API is expected to function correctly, nor can it interpret what constitutes a successful business logic attack. In addition, while security and AppSec teams are at ease with SAST and DAST, developers can find them challenging to use. Consequently, we’ve seen API-specific test tooling gain ground, enabling things like continuous validation of API specifications. API security testing is increasingly being integrated into the API security offering, translating into much more efficient processes, such as automatically associating appropriate APIs with suitable test cases. A major challenge with any application security test plan is generating test cases tailored explicitly for the apps being tested before release. 



Quote for the day:

"The first step toward success is taken when you refuse to be a captive of the environment in which you first find yourself." -- Mark Caine

Daily Tech Digest - November 09, 2023

MIT Physicists Transform Pencil Lead Into Electronic “Gold”

MIT physicists have metaphorically turned graphite, or pencil lead, into gold by isolating five ultrathin flakes stacked in a specific order. The resulting material can then be tuned to exhibit three important properties never before seen in natural graphite. ... “We found that the material could be insulating, magnetic, or topological,” Ju says. The latter is somewhat related to both conductors and insulators. Essentially, Ju explains, a topological material allows the unimpeded movement of electrons around the edges of a material, but not through the middle. The electrons are traveling in one direction along a “highway” at the edge of the material separated by a median that makes up the center of the material. So the edge of a topological material is a perfect conductor, while the center is an insulator. “Our work establishes rhombohedral stacked multilayer graphene as a highly tunable platform to study these new possibilities of strongly correlated and topological physics,” Ju and his coauthors conclude in Nature Nanotechnology.


Conscientious Computing – Facing into Big Tech Challenges

The tech industry has driven incredibly rapid innovation by taking advantage of increasingly cheap and more powerful computing – but at what unintended cost? What collateral damage has been created in our era of “move fast and break things”? Sadly, it’s now becoming apparent we have overlooked the broader impacts of our technological solutions. As software proliferates through every facet of life and the scale of it increases, we need to think more about where this leads us from people, planet and financial perspectives. ... The classic Scope, Cost, Time pyramid – but often it’s the observable functional quality that is prioritised. For that I’ll use a somewhat surreal version of an iceberg – as so much technical (and, effectively, sustainability) debt – a topic for a future blog – is hidden below the water line. Every engineering decision (or indecision) has ethical and sustainability consequences, often invisible from within our isolated bubbles. Just as the industry has had to raise its game on topics such as security, privacy and compliance, we desperately need to raise our game holistically on sustainability.


The CIO’s fatal flaw: Too much leadership, not enough management

So why does leadership get all the buzz? A cynic might suggest that the more respect doing-the-work gets, the more the company might have to pay the people who do that work, which in turn would mean those who manage the work would get paid more than those who think and charismatically express deep and inspirational thoughts. And as there are more people who do work than those who manage it, respecting the work and those who do it would be expensive. Don’t misunderstand. Done properly, leading is a lot of work, and because leading is about people, not processes or tools and technology; it’s time consuming, too. And in fact, when I conduct leadership seminars, the biggest barrier to success for most participants is figuring out and committing to their time budget. Leadership, that is, involves setting direction, making or facilitating decisions, staffing, delegating, motivating, overseeing team dynamics, engineering the business culture, and communicating. Leaders who are committed to improving at their trade must figure out how much time they plan to devote to each of these eight tasks, which is hard enough.


The Next IT Challenge Is All about Speed and Self-Service

One of the most significant roadblocks to rapid cloud adoption is sheer complexity. Provisioning a cloud environment involves dozens of dependent services, intricate configurations, security policies and data governance issues. The cognitive load on IT teams is significant, and the situation is exacerbated by manual processes that are still in place. The vast majority of engineering teams still depend on legacy ticketing systems to request IT for cloud environments, which adds a significant load on IT and also slows engineering teams. This slows down the entire operation, making it difficult for IT and engineering to support business needs effectively. In fact, in one study conducted by Rafay Systems, application developers at enterprises revealed that 25% of organizations reportedly take three months or longer to deploy a modern application or service after its code is complete. The real goal for any IT department is to support the needs of the business. Today, they do that better, faster and more cost-effectively by leveraging cloud technologies to realize all the business benefits of the modern applications being deployed.


The DPDP Act: Bolstering data protection & privacy, making India future-ready

The DPDP Act has a direct impact across industries. Organisations not only need to reassess their existing compliance status and gear up to cope with the new norms but also create a phased action plan for various processes. Moreover, if labeled as SDF, organisations also need to appoint a Data Protection Officer (DPO). In addition, organisations need to devise appropriate data protection and privacy policy framework in alignment with the DPDP Act. Further, consent forms and mechanisms have to be developed to ensure standard procedures as laid out in the legislation. Companies have to additionally invest to adopt the necessary changes in compliance with the law. They need to list down their third-party data handlers, consent types and processes, privacy notices, contract clauses, categorise data, and develop breach management processes. Sharing his perspective on the DPDP Act, Amit Jaju, Senior Managing Director, Ankura Consulting Group (India) says, “The Digital Personal Data Protection Act 2023 has ushered in a new era of data privacy and protection, compelling solution providers to realign their business strategies with its mandates.”


Will AI hurt or help workers? It's complicated

Here's what is certain: CIOs see AI as being useful, but not replacing higher-level workers. JetRockets recently surveyed US CIOs. In its report, How Generative AI is Impacting IT Leaders & Organizations, the custom-software firm found that CIOs are primarily using AI for cybersecurity and threat detection (81%), with predictive maintenance and equipment monitoring (69%) and software development / product development (68%) in second and third place, respectively. Security, you ask? Yes, security. CrowdStrike, a security company, sees a huge demand building for AI-based security virtual assistants. A Gartner study on virtual assistants predicted, "By 2024, 40% of advanced virtual assistants will be industry-domain-specific; by 2025, advanced virtual assistants will provide advisory and intervention roles for 30% of knowledge workers, up from 5% in 2021." By CrowdStrike's reckoning, AI will "help organizations scale their cybersecurity workforce by three times and reduce operating costs by close to half a million dollars." That's serious cash.


From Chaos to Confidence: The Indispensable Role of Security Architecture

Beyond mere firefighting, security architecture embraces the proactive art of strategic defense. It takes a risk-based approach to identifying potential threats, assessing weak points in an organization's IT stack, architecting forward-looking designs and prioritizing security initiatives. By aligning security investments with the organization's risk tolerance and business priorities, security architecture ensures that precious resources are optimally allocated for maximum security defense designed with in-depth zero trust security principles in mind. This reduces enterprise application deployment and operational security costs. It is similar to designing high-rise buildings in a standard manner, following all safety codes and by-laws while still allowing individual apartment owners to design and create their homes as they would prefer. Cyberattacks have become increasingly sophisticated and frequent. As a result, it is imperative for defense systems to have comprehensive, purpose-built architectures and designs in place to protect against such threats. Security architecture provides a complete defense framework by integrating various security components.


Top 5 IT disaster scenarios DR teams must test

Failed backups are some of the most frequent IT disasters. Businesses can replace hardware and software, but if the data and all backups are gone, bringing them back might be impossible or incredibly expensive. Sys admins must periodically test their ability to restore from backups to ensure backups are working correctly and the restore process does not have some unseen fatal flaw. At the same time, there should always be multiple generations of backups, with some of those backup sets off site. ... Hardware failure can take many forms, including a system not using RAID, a single disk loss taking down a whole system, faulty network switches and power supply failures. Most hardware-based IT disaster scenarios can be mitigated with relative ease, but at the cost of added complexity and a price tag. One example is a database server. Such a server can be turned into a database cluster with highly available storage and networking. The cost for doing this would easily double the cost of a single nonredundant server. Administrators would also have to undergo training to manage such an environment.
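A minimal sketch of the restore-testing advice above: back up a file tree, restore it to a scratch location, and compare checksums. The plain directory copies here stand in for a real backup tool; the point is that the restore path itself gets exercised and verified, not just the backup job:

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def checksums(root: Path) -> dict:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    return {
        p.relative_to(root): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def verify_restore(source: Path, restored: Path) -> bool:
    """True only if the restored tree is byte-identical to the source."""
    return checksums(source) == checksums(restored)

with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp, "data"); src.mkdir()
    (src / "orders.csv").write_text("id,total\n1,9.99\n")
    backup = Path(tmp, "backup")
    shutil.copytree(src, backup)         # stand-in for the backup job
    restore = Path(tmp, "restore")
    shutil.copytree(backup, restore)     # stand-in for the restore job
    print(verify_restore(src, restore))
```

Run periodically against a real restore, a check like this catches silently corrupt or incomplete backup sets before a disaster does.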


Mastering AI Quality: Strategies for CDOs and Tech Leaders

Most chief data officers (CDOs) work hard to make their data operations into "glass boxes" -- transparent, explainable, explorable, trustworthy resources for their companies. Then comes artificial intelligence and machine learning (AI/ML), with their allure of using that data for ever-more impressive strategic leaps, efficiencies, and growth potential. However, there's a problem. Nearly all AI/ML tools are "black boxes." They are so inscrutable that even their creators worry about how they produce their results. The speed and depth at which these tools process data without human intervention pose a danger to technology leaders who want to retain control of their data and to verify the quality of the analytics built on it. Combine this with a push to remove humans from the decision loop and you have a potent recipe for decisions to go off the rails. ... With a human collaborator or a human-designed algorithm, it is generally easy to elicit a meaningful response to the question, "Why is this result what it is?" With AI -- and generative AI in particular -- that may not be the case.
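One partial answer to the "why is this result what it is?" problem is model-agnostic probing, such as permutation importance: shuffle one feature at a time and measure how much the black box's accuracy degrades. A minimal sketch, in which the toy model and its feature weights are purely illustrative assumptions:

```python
# Permutation-importance sketch: treat the model as an opaque function
# and measure the accuracy drop when each feature's column is shuffled.
# The "black box" below is a toy stand-in for a real, inscrutable model.
import random

def black_box_model(row):
    # Illustrative opaque model: leans heavily on feature 0, ignores feature 2.
    return 1 if 2.0 * row[0] + 0.5 * row[1] > 1.0 else 0

def accuracy(model, X, y):
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, n_features, seed=0):
    """Accuracy lost when each feature's values are shuffled across rows."""
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    importances = []
    for j in range(n_features):
        col = [row[j] for row in X]
        rng.shuffle(col)
        X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
        importances.append(base - accuracy(model, X_perm, y))
    return importances
```

A feature the model ignores scores near zero; a feature it depends on scores high. That ranking is no substitute for true explainability, but it gives data leaders an empirical handle on an otherwise opaque system.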


Revamping IT for AI System Support

“It’s important for everybody to understand how fast this [AI] is going to change,” said Eric Schmidt, former CEO and chairman of Google. “The negatives are quite profound.” Among the concerns is that AI firms still have “no solutions for issues around algorithmic bias or attribution, or for copyright disputes now in litigation over the use of writing, books, images, film, and artworks in AI model training. Many other as yet unforeseen legal, ethical, and cultural questions are expected to arise across all kinds of military, medical, educational, and manufacturing uses.” The challenge for companies and for IT is that the law always lags technology. There will be few hard-and-fast rules for AI as it advances relentlessly, so AI risks veering outside ethical and legal guardrails. In this environment, legal cases are likely to arise that establish case law and define how AI issues will be addressed. The danger for IT and companies is that they don’t want to become the defining cases for the law by getting sued. CIOs can take action by raising awareness of AI as a corporate risk management concern with their boards and CEOs.



Quote for the day:

"Holding on to the unchangeable past is a waste of energy and serves no purpose in creating a better future." -- Unknown