Daily Tech Digest - November 11, 2025


Quote for the day:

"The measure of who we are is what we do with what we have." -- Vince Lombardi



Your passwordless future may never fully arrive

The challenges are many. Beyond legacy industrial systems, homegrown apps, door/facility access systems, and IoT, even routine workgroup deployment of passwordless solutions is anything but routine. Different operating systems and specialized access requirements typically translate to enterprises needing to roll out multiple passwordless packages, which can be expensive and time-consuming, and create operational delays and other friction. Worst of all, it can create new security holes as attackers try to slip between the cracks of those multiple passwordless systems. ... “Passwordless implementations typically leave a dangerous blind spot. Passwords are still there, lurking inside the passkey enrollment and recovery flows,” says Aaron Painter, CEO of Nametag. “Think of it this way: How do you really know who’s enrolling or resetting a passkey? Attackers don’t have to break the cryptography of passkeys. They go after the weakest link, whether it’s a helpdesk call, an SMS code, or a ‘can’t access my passkey’ button. By keeping both a password and a passkey, organizations multiply their attack surface.” ... Part of the passwordless debate focuses on ROI strategies. The proverbial gold at the end of the rainbow is having all password credentials eliminated. That means an attacker with a 12-month-old admin password from a breach of a partner company would have nothing of value. But as long as some passwords must be supported, the risk of such an attack remains.


CISOs are cracking under pressure

Most CISOs surveyed experienced a major security incident in the last six months. For most, that level of disruption has become normal. More than half said they are personally blamed when breaches occur, and fear their job would be at risk if a serious incident happened under their watch. That sense of personal accountability stands out because many breaches occur despite defenses being in place. Fifty-eight percent of CISOs said at least one recent incident happened even though a tool was supposed to stop it. The researchers say this gap between investment and outcome has left security leaders exposed to reputational and career risk for problems that are often beyond their control. ... Most CISOs say they can quantify risk, but more than half admit they lack standardized, business-focused metrics that make sense to leadership. Boards often want trendlines that show risk is declining or metrics that link incidents to business outcomes. Without these, the conversation between CISOs and directors can break down. This disconnect means security leaders are often held accountable without being equipped to demonstrate progress in the terms boards expect. The researchers note that aligning on a shared understanding of risk is key to reducing tension and helping CISOs do their jobs. ... Many CISOs say they’re being pushed to use AI to cut costs and automate tasks, with some already under formal mandates and others feeling growing pressure from leadership. That puts CISOs in a difficult position.


The Sustainable Transformation Roadmap: Rethink, Align, Deploy

A significant part of the success stems from the weeding out process of debt. Like technical debt in IT systems, process debt refers to the accumulation of outdated procedures, inefficient workflows, and redundant steps that have built up over years of incremental changes. These legacy practices hinder productivity and make it challenging to fully realize the benefits of digital solutions. Overall, while process debt is rampant, forward-thinking organizations succeed by treating automation as a catalyst for redesign, not a quick fix—potentially unlocking millions in annual savings per use case. To sidestep potential traps, organizations should prioritize process optimization before they begin automating. This entails a focus on audit and redesign, starting with thorough process mapping to identify process debt. ... Also, start small and iterate by piloting in low-risk areas, such as invoice chasing or design reviews, measuring against baselines to ensure automation resolves, not replicates, debt. Failure to confront process debt, Yousufani said, leads to a familiar pitfall associated with “citizen-led transformation.” Organizations distribute productivity tools hoping employees will optimize their own workflows. But at best, this bottom-up innovation results in minor efficiency gains: “If an individual deployed Gen AI, and they gain—in a best-case scenario—10% productivity, that’s 10% for one employee. The gains are much greater when looking at an entire process transformation,” he said.


Building Resilient Platforms: Insights from Over Twenty Years in Mission-Critical Infrastructure

In technology terms, a platform represents a set of integrated technologies used as a base to develop other applications or processes. The best platform builders succeed when they are taken for granted, seeing success not in recognition, but in silence. Users can work without ever thinking about the underlying infrastructure, because the platforms simply function, consistently and reliably, making them invisible. ... Successfully hiding complexity while delivering powerful functionality defines platform excellence. The sophisticated engineering underneath should remain invisible to users who simply want to accomplish their tasks without friction. ... Stability means consistent, reliable operation at all times. However, achieving stability through stagnation creates security vulnerabilities from unpatched systems. Patching introduces changes that can impact stability while enabling security. ... The temptation to defer maintenance always exists, but falling behind creates insurmountable technical debt. From a security perspective, the increased exploitation of zero-day vulnerabilities by bad actors demonstrates how quickly deferred maintenance becomes crisis management. Staying evergreen requires eternal vigilance and commitment. Once you fall behind, catching up becomes nearly impossible. This principle demands upfront planning and unwavering execution.


From Data Transfer to Data Trust

As more businesses move to hybrid and multi-cloud environments, data exchanges happen across different infrastructures and jurisdictions, which adds to the complexity and risk. Old models that only look at the perimeter are no longer enough. Instead, companies need a model in which trust is not taken for granted but is always checked. Gartner (2023) says that trust should be built into every transaction, every request for access, and every exchange of data. ... Businesses need to take a big-picture view based on the following pillars to build a trusted data integration framework: Authentication and Authorization: Use strict identity controls like OAuth 2.0, SAML, and context-aware Multi-Factor Authentication (MFA). API gateways should enforce role-based access and rate limiting. Transport Layer Security (TLS) should encrypt data while it is being sent, and Advanced Encryption Standard (AES) should be used to encrypt data while it is at rest. Use checksums, digital signatures, and data validation protocols to ensure the data is correct. Monitoring and Observability: Use observability platforms like ELK Stack, Prometheus, or Splunk to monitor logs, metrics, and traces in real time. Principles of Site Reliability Engineering (SRE) say that you should set up Service-Level Indicators (SLIs), Service-Level Objectives (SLOs), and automatic incident detection.
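The integrity pillar above pairs checksums with digital signatures: a plain checksum only catches accidental corruption, while a keyed construction also catches deliberate tampering. As a minimal sketch of that distinction, the following uses Python's standard library, with HMAC-SHA256 standing in for a full digital signature; the payload and key are illustrative, not from the article:

```python
import hashlib
import hmac

def sha256_checksum(data: bytes) -> str:
    """Unkeyed checksum: detects accidental corruption, not tampering."""
    return hashlib.sha256(data).hexdigest()

def sign(data: bytes, key: bytes) -> str:
    """HMAC-SHA256: without the shared key, an attacker cannot forge a valid tag."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify(data: bytes, key: bytes, tag: str) -> bool:
    """Constant-time comparison avoids leaking information via timing."""
    return hmac.compare_digest(sign(data, key), tag)

payload = b'{"account": "42", "amount": 100}'
key = b"shared-secret"  # illustrative; in practice, load from a secrets manager

tag = sign(payload, key)
assert verify(payload, key, tag)               # intact payload verifies
assert not verify(payload + b"x", key, tag)    # any modification fails verification
```

In a real exchange, the key distribution (or a public-key signature scheme) is the hard part; the sketch only shows why validation belongs alongside TLS and AES in the framework.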


Who Owns the Cybersecurity of Space?

There is no comprehensive and binding international cybersecurity framework governing satellites, orbital systems or ground-to-space communications. Australia's growing space sector, spanning manufacturing in South Australia, launch facilities in the Northern Territory and emerging tracking infrastructure in Queensland, is expanding quickly. ... Many satellites, especially those launched before 2020, lack encryption or rely on outdated telemetry protocols. A single compromised ground station could trigger cascading effects across dependent systems. A man-in-the-middle attack in orbit would not simply exfiltrate data. It could spoof navigation, interrupt emergency communications or feed falsified intelligence to defense networks. We saw a warning sign in the ViaSat KA-SAT attack during the early stages of the Russia-Ukraine conflict, which temporarily crippled satellite communications across Europe. ... For cybersecurity professionals, space is now a part of your threat landscape. Whether you work in defense, telecommunications, energy or government, your organization likely depends on orbital networks.


AI & phishing attacks highlight human risk in Australian fraud

Cybercriminals continue to rely on phishing attacks, exploiting trust and human error to initiate breaches. Despite ongoing investment in advanced detection technologies, there is widespread agreement that improving behavioural awareness within organisations is crucial. ... Salehi highlighted the growing sophistication of AI-powered attacks, describing how threat actors automate reconnaissance and deploy harder-to-detect campaigns. "As AI reshapes the threat landscape, these human vulnerabilities become even more exploitable. Threat actors are using AI to automate reconnaissance and craft highly personalised phishing campaigns that are faster, more convincing and far harder to detect," said Salehi. He went further to advocate for a risk-based security approach, aligning protection with business priorities and focusing on critical assets. "To counter this, organisations must adopt a risk-based approach that aligns security investments to business context - prioritising protection of the assets most critical to operations and continuity, while investing equally in human-centric education and training to recognise AI-generated phishing and deepfake content," said Salehi. ... Fraud schemes are also evolving beyond traditional IT boundaries, impacting operational processes and supply chains. Complex webs of partners and suppliers increase the risk of unnoticed manipulation and data leaks, particularly as generative AI technology is embedded across business operations.


The AI revolution has a power problem

In the race for AI dominance, American tech giants have the money and the chips, but their ambitions have hit a new obstacle: electric power. "The biggest issue we are now having is not a compute glut, but it's the power and...the ability to get the builds done fast enough close to power," Microsoft CEO Satya Nadella acknowledged on a recent podcast with OpenAI chief Sam Altman. "So if you can't do that, you may actually have a bunch of chips sitting in inventory that I can't plug in," Nadella added. ... Already blamed for inflating household electricity bills, data centers in the United States could account for 7% to 12% of national consumption by 2030, up from 4% today, according to various studies. But some experts say the projections could be overblown. "Both the utilities and the tech companies have an incentive to embrace the rapid growth forecast for electricity use," Jonathan Koomey, a renowned expert from UC Berkeley, warned in September. ... Tech giants are quietly downplaying their climate commitments. Google, for example, promised net-zero carbon emissions by 2030 but removed that pledge from its website in June. Instead, companies are promoting long-term projects. Amazon is championing a nuclear revival through Small Modular Reactors (SMRs), an as-yet experimental technology that would be easier to build than conventional reactors.


Cut Lead Time In Half With Pragmatic Agile

Agility isn’t sprints; it’s small, reversible changes flowing safely to users. We get there by adopting trunk-based development, feature flags, and explicit WIP limits. Trunk-based means branches live hours, not weeks. We merge small increments behind flags, ship to production early, and turn features on when we’re ready. Review stays fast because the surface area is small. If we need to bail out, we toggle the flag off and fix forward. No hero rollbacks, no 2 a.m. conference bridge. Feature flags don’t need to be fancy at the start, but they must be disciplined: clear names, default off, auditability, and a plan to retire them. Tooling is personal preference; control plane matters less than consistency. We like OpenFeature because it’s vendor-neutral and simple. ... Boring deploys are the highest compliment. We get them by codifying our path to production and reducing manual gates. Start with a trunk-based pipeline that runs unit tests, security checks, build, and deploy in the same PR context. Then add guardrails: environment protection rules, small canaries, and automatic rollbacks if health checks dip. ... Agile claims to balance speed with quality, but without SLOs we end up arguing feelings. Service-level objectives anchor our pace to user impact. We pick a few golden signals per service—availability, latency, error rate—and set realistic targets based on current performance and business expectations.
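The flag discipline described above (clear names, default off, auditability) is independent of any vendor; the text mentions OpenFeature, but the idea fits in a few lines. A hedged, hand-rolled Python sketch, where the flag name and registry are hypothetical:

```python
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)

@dataclass
class FlagRegistry:
    """Minimal flag store: unknown flags default to off, every evaluation is logged."""
    flags: dict = field(default_factory=dict)

    def set(self, name: str, enabled: bool) -> None:
        self.flags[name] = enabled

    def is_enabled(self, name: str) -> bool:
        value = self.flags.get(name, False)  # default off, per the discipline above
        logging.info("flag %s evaluated -> %s", name, value)  # audit trail
        return value

registry = FlagRegistry()
assert registry.is_enabled("checkout.new-pricing") is False  # shipped dark: off by default

registry.set("checkout.new-pricing", True)   # turn on when ready
assert registry.is_enabled("checkout.new-pricing") is True

registry.set("checkout.new-pricing", False)  # bail-out: toggle off, fix forward
assert registry.is_enabled("checkout.new-pricing") is False
```

A production setup would add per-environment overrides and a retirement plan for stale flags; the control plane matters less than applying these defaults consistently.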


EU Set the Global Standard on Privacy and AI. Now It’s Pulling Back

The EU’s landmark AI Act entered into force earlier this year but will not fully apply until 2026. Reporting by MLex, Reuters, and Financial Times indicates that the European Commission is considering changes that could delay enforcement and reduce transparency. Under the proposals, companies deploying high-risk AI systems could receive a one-year grace period before fines and other obligations take effect. This would particularly benefit providers that already placed generative AI systems on the market, giving them time to adjust without disrupting operations. Draft documents also suggest postponing penalties for transparency violations, such as failing to clearly label AI-generated content, until August 2027. MLex reported that the package would also make compliance easier for companies and centralize enforcement through a new EU AI office. ... The proposal is still being discussed within the Commission and could change before November 19. Once adopted, it will head to EU governments and the European Parliament for approval. Privacy advocates have criticized the fast-track process of the Digital Omnibus. While the GDPR took years to negotiate, public consultation on the Omnibus only concluded in October. According to noyb, some Brussels units had just five working days to review a 180+ page draft. The Commission has not prepared impact assessments, saying the proposed changes are “targeted and technical.”

Daily Tech Digest - November 10, 2025


Quote for the day:

"You can only lead others where you yourself are willing to go." -- Lachlan McLean



CISOs must prove the business value of cyber — the right metrics can help

With a foundational ERM program, and by aligning metrics to business priorities, cybersecurity leaders can ultimately prove the value of the cybersecurity function. Useful metrics examples in business terms include maturity, compliance, risk, budget, business value streams, and status of SecDevOps (shifting left) adoption, Oberlaender explains. But how does a cybersecurity expert learn what’s important to the business? ... “Boards are faced with complex matters such as impact on interest rates, tariffs, stock price volatility, supply chain issues, profitability, and acquisitions. Then the CISO enters the boardroom with their MITRE ATT&CK framework, patching metrics and NIST maturity models,” Hetner continues. “These metrics are not aligned to what the board is conditioned to reviewing.” ... Rather than just asking “are we secure?” business leaders are asking what metrics their cyber components are using to measure and quantify risk and how they’re spending against those risks. For CISOs, this goes beyond measuring against frameworks such as NIST, listing a litany of security vulnerabilities they patched, or their mean time to response. “Instead, we can say, ‘This is our potential financial exposure’,” Nolen explains. “So now you’re talking dollars and cents rather than CVEs and technical scores that board members don’t care about. What they care about is the bottom line.”


Feeding the AI beast, with some beauty

AI-driven growth is placing an unprecedented load on data centres worldwide, and India is poised to shoulder a large share of the incremental electricity, real estate, and cooling burden created by rising AI demand. The IEA estimates that AI adoption is accelerating at a rapid pace. Under realistic scenarios, AI workloads alone could require on the order of 1–1.5 GW of continuous IT power—equivalent to 8.8–13 TWh annually—in India by 2030. This translates into a significant new draw on grids, water resources, and capex for cooling and power infrastructure. Recent analyses indicate that while AI’s share of data centre power today stands in the single-digit to low-teens range, it could climb to 20–40 per cent or more by 2030 in some scenarios, fundamentally reshaping the power-consumption profile of digital infrastructure. ... As data centres grow in scale, sustainability is becoming a competitive differentiator—and that’s where Life Cycle Assessments (LCAs) and Environmental Product Declarations (EPDs) play a critical role. An LCA is a systematic method for evaluating the total environmental impact of a product, process, or system across its entire life cycle. For a data centre, this spans both upstream (embodied) impacts—such as construction materials, IT equipment manufacturing, and cooling and power infrastructure including gensets—as well as operational impacts like electricity consumption.


8 IT leadership tips for first-time CIOs

Generally speaking, the first three years can make or break your IT leadership career, given that digital leaders globally tend to stay at one company for just over that length of time on average, according to the 2025 Nash Squared Digital Leadership Report. CIOs looking to sidestep that statistic are taking intentional measures, ensuring they get early wins, and perhaps most importantly, not coming into their role with preconceived ideas about how to lead or assuming what worked in a past job can be replicated. ... The CTO of staffing and recruiting firm Kelly says that “building momentum, finding ways to get quick wins from the low-hanging fruit” will help build credibility with the leadership team. Then, you can parlay those into bigger wins and avoid spinning out, he says. ... While making connections and establishing relationships is critical, Lewis stresses the importance of not rushing to change things right away when you’re new to the job. “Let it set for a while,” he says. ... This is especially true of midsize and larger organizations “where the clarity of strategy and clarity of what’s important … isn’t always well documented and well thought out,” Rosenbaum says. Knowing the maturity of your organization is really important, he says. “Some CIO roles are just about keeping the lights on, making sure security is good at a lower level. As the company starts to mature, they start thinking about technology as an enabler, and to that end, they start having maybe a more unified technology strategy.”


Drata’s VP of Data on Rethinking Data Ops for the AI Era: Crawl, Walk, Run — Then Sprint

While GenAI may be the shiny new tool, Solomon makes it clear that foundational work around ingestion and transformation is far from trivial. “We live and die by making sure that all the data has been ingested in a fresh manner into the data warehouse,” he explains. He describes the “bread and butter” of the team: synchronizing thousands of MySQL databases from a single-tenant production architecture into the warehouse — closer to real-time. “We do a lot of activities with regard to the CDC pipeline, which is just like driving terabytes of data per day.” But the data team isn’t working in isolation. GTM executives return from conferences excited about GenAI. ... Rather than building fully-fledged pipelines from day one, the team prioritizes quick feedback loops — using sandboxes, cloud notebooks, or Streamlit apps to test hypotheses. Once business impact is validated, the team gradually introduces cost tracking, governance, and scalability. If a stakeholder’s hypothesis lacks merit, there is no point in building complex data pipelines, governance frameworks, or cost-tracking systems. This shift in mindset, he explains, is something many data teams are grappling with today. Traditionally, data teams were trained to focus on building scalable, robust pipelines from day one — often requiring significant upfront effort. But this often led to cost inefficiencies and delays.


Model Context Protocol Servers: Build or Buy?

"The tension lies in whether you have the sustained capacity to keep pace with protocols that are still being debated by their maintainers," said Rishi Bhargava, co-founder at Descope, a customer and agentic IAM platform. "Are you prepared to build the plane while it's flying, or would you rather upgrade a finished plane mid-flight?" ... "From a business perspective, the build versus buy decision for MCP servers boils down to strategic priorities and risk appetite," Jain said. Building MCP servers in-house gives you "complete control," but buying provides "speed, reliability, and lower operational burden," he said. But others think there's no reason to rush your decision. ... "Most companies shouldn't be doing either yet," he said, explaining that companies should first focus on the specific business goals they are trying to achieve, rather than on which existing applications they think should have AI features added. "Build when you have an actual AI application that requires custom data integration and you understand exactly what intelligence you're trying to deploy. If you're simply connecting ChatGPT to your CRM, you don't need MCP at all," Prywata said. ... "It is usually best to build [MCP servers] in-house when compliance, performance tuning, or data sovereignty are key priorities for the business," said Marcus McGehee, founder at The AI Consulting Lab. 


Every CIO Fails; The Smart Ones Admit It

There's a "hero CIO" myth deeply rooted in our mindset - the idea that you're the person who makes technology work, no matter what. Admitting failure feels like admitting incompetence, especially in boardrooms where few understand the complexity of IT. Organizational incentives also discourage openness. Many companies punish failure more than they reward learning. I've seen talented CIOs denied promotion because of a single delayed project, even when their broader portfolio delivered value. When institutional memory focuses on what went wrong rather than what was learned, people stop taking risks. The second factor is C-suite politics. In some environments, transparency becomes ammunition. Another team might use a project delay to justify requests for budget increases or to exert influence. And finally, CIOs worry about vendor perception, admitting setbacks could impact pricing, support or their reputation with partners. ... Build your transparency muscle in peacetime, not when something is on fire. By the time a crisis hits, it's too late to establish credibility. Make transparency habitual. Share work in progress, not just results. Celebrate learning, not perfection. Run "pre-mortems" where you assume a project failed and work backwards to identify what could go wrong. And when you make a mistake, own it publicly. The honesty earns you more trust than a polished explanation ever will.


6 proven lessons from the AI projects that broke before they scaled

In analyzing dozens of AI PoCs that sailed on through to full production use — or didn’t — six common pitfalls emerge. Interestingly, it’s not usually the quality of the technology but misaligned goals, poor planning or unrealistic expectations that caused failure. ... Define specific, measurable objectives upfront. Use SMART criteria. For example, aim for “reduce equipment downtime by 15% within six months” rather than a vague “make things better.” Document these goals and align stakeholders early to avoid scope creep. ... Invest in data quality over volume. Use tools like Pandas for preprocessing and Great Expectations for data validation to catch issues early. Conduct exploratory data analysis (EDA) with visualizations (like Seaborn) to spot outliers or inconsistencies. Clean data is worth more than terabytes of garbage. ... Start simple. Use straightforward algorithms like random forest or XGBoost from scikit-learn to establish a baseline. Only scale to complex models — TensorFlow-based long short-term memory (LSTM) networks — if the problem demands it. Prioritize explainability with tools like SHAP to build trust with stakeholders. ... Plan for production from day one. Package models in Docker containers and deploy with Kubernetes for scalability. Use TensorFlow Serving or FastAPI for efficient inference. Monitor performance with Prometheus and Grafana to catch bottlenecks early. Test under realistic conditions to ensure reliability.
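The "invest in data quality" lesson above can be made concrete without any framework: the kind of expectation checks tools like Great Expectations formalize boil down to asserting required fields and plausible ranges before data feeds a model. A minimal stdlib-only sketch, where the sensor rows, column names, and bounds are hypothetical:

```python
import math

def validate_rows(rows, required, numeric_bounds):
    """Expectation-style checks: return (row index, column, problem) for each violation."""
    issues = []
    for i, row in enumerate(rows):
        for col in required:
            if row.get(col) is None:
                issues.append((i, col, "missing value"))
        for col, (lo, hi) in numeric_bounds.items():
            v = row.get(col)
            if v is not None and (math.isnan(v) or not lo <= v <= hi):
                issues.append((i, col, f"out of range [{lo}, {hi}]: {v}"))
    return issues

sensor_rows = [
    {"machine_id": "A1", "temp_c": 71.5},
    {"machine_id": None, "temp_c": 68.0},    # missing identifier
    {"machine_id": "A3", "temp_c": 9001.0},  # the kind of outlier EDA plots also surface
]
problems = validate_rows(
    sensor_rows,
    required=["machine_id"],
    numeric_bounds={"temp_c": (-40.0, 150.0)},
)
assert len(problems) == 2  # one missing id, one impossible temperature
```

Running checks like these at ingestion time is cheap insurance against training a downtime-prediction model on garbage rows.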


Andela CEO talks about the need for ‘borderless talent’ amid work visa limitation

Globally, three of four IT employers say they lack the tech talent they need, and the outlook will only get more dire as AI creates a demand for high-skilled specialists like data engineers, senior architects, and agentic orchestrators. Visa programs aren’t designed by the laws of supply and demand. They’re defined by policy makers and are updated infrequently. So, they’ll never truly be in sync with the needs of the labor market. ... Brilliant people exist around the world. It’s why they want to sponsor people for H-1B visas. But hiring outside of those traditional pathways — to work with a brilliant machine learning engineer from Cairo or São Paulo, for example — is…a long, painful process that takes months and is inaccessible to them. They don’t know that they can find the right partner, someone who has sorted this all out and vetted talent and developed compliance with global labor and tax laws, etc. Once they understand that those partners exist, the global workforce becomes instantly accessible to them. ... Technical hiring still feels like a gamble, even though software development is, relatively speaking, packed with deterministic skills. There are two main problems. One problem is the data problem. There’s not enough reliable data about what a job actually requires and what a worker is capable of doing. Today, we rely on resumes and job descriptions. 


The Overwhelm Epidemic: Why Resilience Begins with You

People have so much to do and not enough time. There’s nothing new about the phenomenon of not having enough time to do what needs to be done, but today it’s different. Today, it’s unique because this feeling of overwhelm has been continuously expanding since early 2020, when we experienced the pandemic. We’re being overwhelmed to an extent most people are not equipped to deal with.
For you in operational resilience, I believe self-care is more critical now than it has ever been. You are only able to help your clients and their systems be resilient to the extent you are taking care of yourself and are resilient. ... Most say something like, “I’m going to double down and focus on this. I’m going to work harder and spend as much time as needed, even if it means cutting into my already precious personal time.” They think working harder is the best approach, but here’s the thing: they are wrong.
When you are operating at high-stress levels, introducing more stress by doubling down and working harder actually reduces your output. ... Bottom line, a thriving, elite mindset is the foundation of personal wellbeing and professional success.
Turning to positive psychology: underlying Martin Seligman's model for human flourishing are 24 positive character strengths. While more research is still needed, the research to date has concluded that of the 24, the best predictor of living a flourishing, thriving life is gratitude.


Ask a Data Ethicist: What Are the Impacts of AI on Creativity, Schools, and Industry?

Generally speaking, if the goal is to reduce the cost of labour by replacing it with equipment (capital – or AI), then assuming the AI tool replaces the labour in a way that is acceptable to drive the desired outputs the business could possibly drive more profit. So that might be construed as positive for the business. However, businesses exist in the bigger context of society. To take an extreme example, if a large section of the population loses their jobs, they can’t buy your products, and that could hurt your organization. It also puts more burdens on society for a social safety net, perhaps resulting in tax increases or some other impacts to business to pay for those services. ... I think it’s important to disclose the use of AI in a process. For video, audio or images – a symbol or some text to say “AI generated” can accomplish that goal. There is also watermarking that content which is a more technical method. For text, it’s trickier. I don’t think everyone needs to be told about every instance of a spellchecker (to use an extreme example) but if the whole thing is generated, then it is important to say that. This is where a policy can be helpful. For example, one might apply the 80/20 rule – if less than 20% is generated, perhaps it’s not necessary to disclose it. That said, there better not be any inaccuracies or errors in the content if you choose NOT to disclose it. See this case in Australia. This is an example of why I think disclosing, overall, is a good idea.

Daily Tech Digest - November 09, 2025


Quote for the day:

"The only way to achieve the impossible is to believe it is possible." -- Charles Kingsleigh



Way too complex: why modern tech stacks need observability

Recent outages have demonstrated that a heavy dependence on digital systems can lead to cascading faults that halt financial transactions, disrupt public transportation and even bring airport operations to a standstill. ... To operate with confidence, businesses must see across their entire digital supply chain, which is not possible with basic monitoring. Unlike traditional monitoring, which often focuses on siloed metrics or alerts, observability provides a unified, real-time view across the entire technology stack, enabling faster, data-driven decisions at scale. Implementing real-time, AI-powered observability covers every component from infrastructure and services to applications and user experience. ... Observability also enables organizations to proactively detect anomalies before they escalate into outages, quickly pinpoint root causes across complex, distributed systems and automate response actions to reduce mean time to resolution (MTTR). The result is faster, smarter and more resilient operations, giving teams the confidence to innovate without compromising system stability, a critical advantage in a world where digital resilience and speed must go hand in hand. Resilient systems must absorb shocks without breaking. This requires both cultural and technical investment, from embracing shared accountability across teams to adopting modern deployment strategies like canary releases, blue/green rollouts and feature flagging.
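The proactive anomaly detection described above often starts with a simple service-level indicator, such as a rolling error rate compared against a threshold derived from an SLO. A hedged, stdlib-only sketch; the window size and 5% threshold are illustrative values, not from the article:

```python
from collections import deque

class ErrorRateSLI:
    """Rolling error rate over the last N requests; flags trouble before it escalates."""
    def __init__(self, window: int = 100, slo_threshold: float = 0.05):
        self.outcomes = deque(maxlen=window)  # True = success, False = error
        self.slo_threshold = slo_threshold

    def record(self, ok: bool) -> None:
        self.outcomes.append(ok)

    def error_rate(self) -> float:
        if not self.outcomes:
            return 0.0
        return self.outcomes.count(False) / len(self.outcomes)

    def breaching(self) -> bool:
        """True when the rolling rate exceeds the SLO-derived threshold."""
        return self.error_rate() > self.slo_threshold

sli = ErrorRateSLI(window=100, slo_threshold=0.05)
for _ in range(95):
    sli.record(True)
assert not sli.breaching()      # healthy traffic stays under the threshold

for _ in range(10):
    sli.record(False)           # a burst of failures pushes the rolling rate past 5%
assert sli.breaching()          # this is where an automated alert or rollback fires
```

A real observability platform computes indicators like this across every service and wires breaches to alerting and automated rollback, but the core signal is no more complicated than this.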


Radical Empowerment From Your Leadership: Understood by Few, Essential for All

“Radical empowerment, for me, isn’t about handing people a seat at the table. It’s about making sure they know the seat is already theirs,” said Trenika Fields, Business Legal, AI Leader at Cisco, MIT Sloan EMBA Class of ’26. “I set the vision and I trust my team to execute in ways that are anchored in the mission and tied to real business outcomes. But trust without depth doesn’t work. That’s where leading with empathy comes in. It’s my secret sauce, and it has to be real. You can’t fake it. People know when it’s performative. Real empathy builds confidence, and confidence fuels bold, decisive execution. When people feel seen, trusted, and strategically aligned, they lead like builders, not bystanders. Strip that trust and empathy away, and radical disempowerment moves in fast. Voices go quiet. Momentum dies. Innovation flatlines. But when you get it right, you don’t just build teams. You build powerhouses that set the standard and raise the bar for everyone else.” Why, given how simple this is, is it so hard for senior leadership to do versus say? I worked in an environment years ago when “radical candor” was the theme du jour rather than “radical empowerment.” An executive over an executive over my boss was explaining radical candor, which, very simply put, means being constructive and forthright with empathy to help others grow.


Banks Can Convert Messy Data into Unstoppable Growth

Banks recognize the potential in tapping a trove of customer data, much of it unstructured, as a tool to personalize interactions and become more proactive. They are sitting on a goldmine of unstructured information hidden in PDFs, scanned forms, call notes and emails — data that, once cleaned and organized, can unlock new business opportunities, says Drew Singer, head of product at Middesk. ... The ability to successfully turn data into insights often depends on clear parameters for how data is handled. This includes a shared understanding of who owns the data, how it will be managed and stored, and a defined governance structure — possibly through committees — for overseeing its use, Deutsch says. "If you don’t set these rules, once data starts flowing, you will lose control of it. You will most likely lose quality," he says. ... With the data governance structure firmly in place, FIs are positioned to use additional tools to garner action-oriented insights across the organization. Truist Client Pulse, for example, uses AI and machine learning to analyze customer feedback across channels. ... "We’ve got a population of teammates using the tool as it stands today, to better understand regional performance opportunities …what’s going well with certain solutions that we have, and where there are areas of opportunity to enhance experience and elevate satisfaction to drive to client loyalty," says Graziano. 


Securing Digital Supply Chains: Confronting Cyber Threats in Logistics Networks

Modern logistics networks are filled with connected devices — from IoT sensors tracking shipments and telematics in trucks, to automated sorting systems and industrial controls in smart warehouses and ports. This Internet of Things (IoT) revolution offers incredible efficiency and real-time visibility, but it also increases the attack surface. Each connected sensor, RFID reader, camera, or vehicle telemetry unit is essentially an internet entry point that could be exploited if not properly secured. The spread of IoT devices introduces new vulnerabilities that must be managed effectively. For example, a hacker who hijacks a vulnerable warehouse camera or temperature sensor might find a way into the larger corporate network. ... The tightly interwoven nature of modern supply chains amplifies the impact of any single cyber incident, highlighting the importance of robust cybersecurity measures. Companies are now digitally linked with vendors and logistics partners, sharing data and connecting systems to improve efficiency. However, this interdependence means that a security failure at one point can quickly spread outward. ... While large enterprises may invest heavily in cybersecurity, they often depend on smaller partners who might lack the same resources or maturity. Global supply chains can involve hundreds of suppliers and service providers with varying security levels. 


For OT Cyber Defenders, Lack of Data Is the Biggest Threat

Data in the OT and ICS world is transient, said Lee. Instructions - legitimate, or not - flow across the network. Once executed, they vanish. "If I don't capture it during the attack, it's gone," Lee said. Post-incident forensics is basically impossible without specialized monitoring tools already in place. "So for the companies that aren't doing that data collection, that monitoring, prior to the attacks, they have no chance at actually figuring out if a cyberattack was involved or not." And that is a problem when nation-state adversaries have pre-positioned themselves within the networks of critical infrastructure providers, apparently ready to pivot to OT exploitation in time of conflict. ... Even when critical infrastructure operators do capture OT monitoring data, the sheer complexity of modern industrial processes means that finding out what went wrong is difficult. The inability to make use of more detailed data is an indicator of immaturity in the OT security space, Bryson Bort told Information Security Media Group. "The way I summarize the OT space is, it's a generation behind traditional IT," said Bort, a U.S. Army veteran and founder of the non-profit ICS Village. Bort helps organize the annual Hack the Capitol event, but he makes his living selling security services to critical infrastructure owners and operators. Most operators still don't have visibility into the ICS devices on their networks, Bort said. "What do I have? What assets are on my network?"


Cross-Border Compliance: Navigating Multi-Jurisdictional Risk with AI

The digital age has turned global expansion from an aspiration into a necessity. Yet, for companies operating across multiple countries, this opportunity comes wrapped in a Gordian knot of cross-border compliance. The sheer volume, complexity, and rapid change of multi-jurisdictional regulations—from GDPR and CCPA on data privacy to complex Anti-Money Laundering (AML) and financial reporting rules—pose an existential risk. What seems like a local detail in one jurisdiction may spiral into a costly mistake elsewhere. ... AI helps with cross-border compliance by automating risk management through real-time monitoring, analyzing vast datasets to detect fraud, and keeping up with constantly changing regulations. It navigates complex rules by using natural language processing (NLP) to interpret regulatory texts and automating tasks like document verification for KYC/KYB processes. By providing continuous, automated risk assessments and streamlining compliance workflows, AI reduces human error, improves efficiency, and ensures ongoing adherence to global requirements. AI, specifically through machine learning (ML) and NLP, is the critical tool for cutting compliance costs by up to 50% while drastically improving accuracy and speed. AI and ML solutions, often referred to as RegTech, are streamlining compliance by automating tasks, enhancing data analysis, and providing real-time insights.
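As a toy illustration of the NLP step above, a RegTech pipeline might first surface obligation-bearing sentences from a regulatory text before routing them for deeper analysis. The cue list and sample text are invented; production systems use trained language models rather than keyword matching.

```python
import re

# Hypothetical cue phrases that often signal a binding obligation.
OBLIGATION_CUES = ("shall", "must", "is required to", "no later than")

def extract_obligations(regulation_text):
    """Return sentences that look like binding obligations."""
    sentences = re.split(r"(?<=[.!?])\s+", regulation_text.strip())
    return [s for s in sentences
            if any(cue in s.lower() for cue in OBLIGATION_CUES)]

text = ("Controllers must notify the authority within 72 hours. "
        "This article provides definitions. "
        "Processors shall maintain records of processing activities.")
print(extract_obligations(text))
# -> ['Controllers must notify the authority within 72 hours.',
#     'Processors shall maintain records of processing activities.']
```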


Best Practices for Building an AI-Powered OT Cybersecurity Strategy

One challenge in defending OT assets is that most industrial facilities still rely on decades-old hardware and software systems that were not designed with modern cybersecurity in mind. These legacy systems are often difficult to patch and contain documented vulnerabilities. Sophisticated adversaries know this and exploit these outdated systems as a point of entry. ... OT cybersecurity and regulatory compliance are tightly linked in manufacturing, but not interchangeable. Consider regulatory compliance the minimum bar you must clear to stay legally and contractually safe. At the same time, cybersecurity is the continuous effort you must take to protect your systems and operations. Manufacturers increasingly must prove OT cyber resilience to customers, partners, and regulators. A strong cybersecurity posture helps ensure certifications are passed, contracts are won, and reputations are protected. ... AI is a powerful tool for bolstering OT cybersecurity strategies by overcoming the common limitations of traditional, rule-based defenses. AI, whether machine learning, predictive AI, or agentic AI, provides advanced capabilities to help defenders detect threats, automate responses, manage assets, and enhance vulnerability management. ... Human oversight and expertise are vital for ensuring AI quality and contextual accuracy, especially in safety-critical OT environments. 


Training Data Preprocessing for Text-to-Video Models

Getting videos ready for a dataset is not merely a checkbox task - it’s a demanding, time-consuming process that can make or break the final model. At this stage, you’re typically dealing with a large collection of raw footage with no labels, no descriptions, and at best limited metadata like resolution or duration. If the sourcing process was well-structured, you might have videos grouped by domain or category, but even then, they’re not ready for training. The problems are straightforward but critical: there’s no guiding information (captions or prompts) for the model to learn from, and the clips are often far too long for most generative architectures, which tend to work with a context window (length of the video, like number of tokens for Large Language Models) measured in tens of seconds, not minutes. ... It might seem like the fastest approach is to label every scene you have. In reality, that’s a direct route to poor results. After all the previous steps, a dataset is rarely clean: it almost always contains broken clips, low-quality frames, and clusters of near-identical segments. The filtering stage exists to strip out this noise, leaving the model only with content worth learning from. This ensures that the model doesn’t spend time on data that won’t improve its output. ... Building a proper text-to-video dataset is an extremely complex task. However, it is impossible to build a text-to-video generation model without a good dataset.
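The filtering stage described above can be sketched as a single pass over clip metadata: drop broken clips, drop clips that don't fit the model's context window, and drop near-duplicates of clips already kept. Every field name, threshold, and the toy signature scheme here are assumptions for illustration; production pipelines compare perceptual hashes or learned embeddings.

```python
def similarity(a, b):
    """Toy signature similarity: fraction of matching elements."""
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b), 1)

def filter_clips(clips, min_seconds=2.0, max_seconds=30.0, dup_threshold=0.98):
    """Keep clips that decode, fit the context window, and are not
    near-duplicates (by signature) of an already-kept clip."""
    kept = []
    for clip in clips:
        if clip.get("corrupt"):                      # broken/undecodable footage
            continue
        if not (min_seconds <= clip["duration"] <= max_seconds):
            continue                                 # outside the context window
        if any(similarity(clip["signature"], k["signature"]) >= dup_threshold
               for k in kept):
            continue                                 # near-identical segment
        kept.append(clip)
    return kept

clips = [
    {"name": "a", "duration": 12.0, "signature": [1, 2, 3, 4], "corrupt": False},
    {"name": "a_dup", "duration": 12.1, "signature": [1, 2, 3, 4], "corrupt": False},
    {"name": "short", "duration": 0.5, "signature": [9, 9, 9, 9], "corrupt": False},
    {"name": "broken", "duration": 8.0, "signature": [5, 6, 7, 8], "corrupt": True},
    {"name": "b", "duration": 20.0, "signature": [5, 6, 7, 8], "corrupt": False},
]
print([c["name"] for c in filter_clips(clips)])  # -> ['a', 'b']
```

Only the clean, deduplicated survivors then move on to captioning, which is why filtering before labeling saves so much annotation effort.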


Putting Design Thinking into Practice: A Step-by-Step Guide

The key aim of this part of the design process is to frame your problem statement. This will guide the rest of your process. Once you’ve gathered insights from your users, the next step is to distil everything down to the real issue. There are many ways to do this, but if you’ve spoken to several users, start by analysing what they said to find patterns — what themes keep coming up, and what challenges do they all seem to face? ... Once you’ve got your problem statement, the next step is to start coming up with ideas. This is the fun part! The aim of this stage is not to find the perfect idea straight away, but to come up with as many ideas as possible. Start by brainstorming everything that comes to mind, no matter how unrealistic it sounds. At this point, quantity matters more than quality — you can always refine later. Write your ideas down, sketch them, or talk them through with friends or teammates. You might be surprised at how one silly suggestion sparks a genuinely good idea. ... Testing is the “last” stage of the design process. I say last with a bit of hesitation, because while it is technically last on the diagram, you are guaranteed to get a lot of feedback that will require you to go back to earlier stages of the design process and revisit ideas.


Beyond Resilience: How AI and Digital Twin technology are rewriting the rules of supply chain recovery

For decades, supply chain resilience meant having backup plans, alternate suppliers, safety stock, and crisis playbooks. That model doesn’t hold anymore. In a post-pandemic world shaped by trade wars, climate volatility, and technology shocks, disruptions are neither rare nor isolated. They’re structural. ... The KPIs of resilience have evolved. In most companies, traditional metrics like on-time delivery or supplier lead time fail to capture the system’s true flexibility. Modern analytics teams are redefining the measurement architecture around three key indicators: Mean time to recovery (MTTR): the time between initial disruption and full operational stability;  Conditional value-at-risk (CVaR): a probabilistic measure of financial exposure under extreme stress; Supply network resilience index (SNRI): a composite score tracking substitution agility and cross-tier visibility. ... A hidden benefit of this new approach is its environmental alignment. When Schneider Electric built a multi-tier AI twin for its Asia-Pacific operations, it discovered that optimizing for resilience, diversifying ports, balancing lead times, and automating inventory allocation also reduced carbon intensity per unit shipped by 12%; This was not the goal, but it proved that sustainability and resilience share a common denominator: Efficiency. The smarter the network, the smaller its waste footprint. In boardrooms today, that realization is quietly rewriting ESG strategy.
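Of the three indicators listed above, CVaR is the most directly computable: it is the mean loss in the worst (1 − alpha) tail of the loss distribution. A minimal sketch, with loss scenarios and the confidence level invented purely for illustration:

```python
def cvar(losses, alpha=0.95):
    """Conditional value-at-risk: average loss beyond the alpha-quantile (VaR)."""
    ordered = sorted(losses)
    cutoff = int(len(ordered) * alpha)   # index of the VaR quantile
    tail = ordered[cutoff:] or [ordered[-1]]
    return sum(tail) / len(tail)

# 100 simulated disruption-cost scenarios (in $k): mostly mild, a few severe.
losses = [10] * 95 + [200, 300, 400, 500, 1000]
print(cvar(losses, alpha=0.95))  # -> 480.0
```

Unlike an average, CVaR deliberately ignores the 95 mild scenarios and reports only the expected cost when things go badly, which is why it suits resilience planning.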

Daily Tech Digest - November 08, 2025


Quote for the day:

"Always remember, your focus determines your reality." -- George Lucas



We can’t ignore cloud governance anymore

Many organizations are still treating cloud governance as an afterthought. Instead, enterprises pour resources into migration and adoption at the expense of creating a governance framework meant to manage risks proactively. This oversight leads to the type of major outages and service disruptions we’ve seen recently, which cost companies millions of dollars and erode brand trust. Events like these aren’t inevitable. With proper governance structures in place, much of the fallout can be mitigated or avoided altogether. ... Risks that were irrelevant five years ago, such as cloud-native application security or hybrid cloud architecture vulnerabilities, are now front and center. Enterprises must rethink their approach to risk in the cloud, from redefining acceptable levels of exposure to embedding automated tools that dynamically address vulnerabilities before they evolve into crises. In the book, we cover strategies for incorporating dynamic risk management tools, compliance structures, and a culture of accountability throughout an enterprise’s operations. ... The majority of enterprises are rolling the dice. The belief that cloud computing inherently eliminates risks is a dangerous misconception; without guardrails and policies to control how the cloud operates within an organization, risks can grow unchecked. Enterprises are unknowingly declining millions of dollars in potential savings simply because they don’t invest in governance.


The Art of Lean Governance: The Cybernetics of Data Quality

Without this cybernetic interplay, data governance devolves into static policy documents rather than a living, self-correcting mechanism. For risk officers and auditors, this distinction defines whether data risk is truly controlled or merely reported. The systems that thrive will be those that can self-correct faster than they degrade. ... Traditional data risk management has focused on frameworks, thresholds, and remediation logs. The cybernetic view goes further: it treats risk as system entropy — the measure of disorder introduced when feedback loops are weak or delayed. Consider financial reconciliation. When the flow of transactional data between ledgers, systems, and reports is disrupted, discrepancies emerge. If the feedback mechanism (the reconciliation engine) is not fast or intelligent enough, the delay amplifies uncertainty across dependent systems, and risk compounds through interconnection. Thus, data risk management is a function of response latency and feedback precision. Modern systems must evolve toward autonomous reconciliation, utilizing pattern recognition and AI-assisted anomaly detection to maintain equilibrium in near real-time. This is cybernetic risk control — adaptive, responsive, and context-aware. ... Cybernetics thrives on understanding the flow of energy, signals, and cause and effect. Data lineage is the cybernetic map of that flow. It illustrates how data is transformed, where it originates, and how it propagates through systems. 
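The reconciliation feedback loop described above can be illustrated with a toy diff between two ledgers, assuming transactions are simple (id, amount) pairs; real engines match on fuzzier keys and tolerances. The point is speed: the faster this loop surfaces discrepancies, the less "entropy" compounds downstream.

```python
from collections import Counter

def reconcile(ledger_a, ledger_b):
    """Return (entries missing from B, entries missing from A).
    Empty lists mean the ledgers are in equilibrium."""
    a, b = Counter(ledger_a), Counter(ledger_b)
    missing_from_b = list((a - b).elements())  # in A, absent from B
    missing_from_a = list((b - a).elements())  # in B, absent from A
    return missing_from_b, missing_from_a

a = [("tx1", 100), ("tx2", 250), ("tx3", 75)]
b = [("tx1", 100), ("tx3", 75), ("tx4", 60)]
print(reconcile(a, b))  # -> ([('tx2', 250)], [('tx4', 60)])
```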


Role Reversals: How AI Trains Humans

In some cases, LLMs can shape how people think about topics such as culture, morality, and ethics. At some point, these complex feedback loops blur the line between human and machine thinking—including who is teaching whom. “Research shows that it’s possible to influence the vocabulary of large populations—potentially on a global scale. This shift in language can, in turn, reshape thinking, culture, and public discourse,” said Hiromu Yakura ... In fact, human behavior changes significantly when people use AI, according to a study from a research group at Washington University in St. Louis, MO. Using the behavioral economic bargaining tool Ultimatum Game, they found that study participants who thought their actions would help train an AI system were more likely to reject an “unfair” payout—even when it came at a personal cost. The reason? They wanted to teach AI what’s fair. ... AI-generated language can also help spread bias, misinformation, and narrow the way people think—including by design. Today, social media algorithms amplify and bury content to dial up user engagement. In the future, governments, political strategists, and others could tap AI-generated language to sway—and perhaps manipulate—public opinion. AI researchers like Treiman, already uneasy about how little is known about the inner workings of most algorithms, are raising red flags. Secrecy, she argued, leaves the public in the dark about systems that increasingly shape daily life.


How Data Is Reshaping Science – Part 1: From Observation to Simulation

With so much data and powerful AI models at their fingertips, researchers are doing more and more of their work inside machines. Across many fields, experiments that once started in a lab now begin on a screen. AI and simulation have flipped the order of discovery. In many cases, the lab has become the final step, not the first. You can see this happening in almost every area of science. Instead of testing one idea at a time, researchers now run thousands of simulations to figure out which ones are worth trying in real life. Whether they’re working with new materials, brain models, or climate systems, the pattern is clear: computation has become the proving ground for discovery. ... Scientists aren’t just testing hypotheses or peering into microscopes anymore. More and more, they’re managing systems — trying to stop models from drifting, tracking what changed and when, making sure what comes out actually means something. They’ve gone from running experiments to building the environment where those experiments even happen. And whether they’re at DeepMind, Livermore, NOAA, or just some research team spinning up models, it’s the same kind of work. They’re checking whether the data is usable, figuring out who touched it last, wondering if the labels are even accurate. AI can do a lot, but it doesn’t know when it’s wrong. It just keeps going. That’s why this still depends on the human in the loop.


ID verification laws are fueling the next wave of breaches

The cybersecurity community has long lived by a simple principle: Don't collect more data than you can protect. But ID laws and other legal mandates now force many organizations to store massive amounts of sensitive data, putting them in the precarious situation of dealing with information they don’t necessarily want but have to safeguard. ... Age verification laws are proliferating worldwide. These laws typically mandate age verification through government-issued documents, such as driver's licenses, passports or national ID cards. Failure to verify IDs can result in millions of dollars in fines. The intention is sensible: protecting minors from inappropriate online content. But for the organizations that have to collect ID data, the laws can lead to a security nightmare. Organizations now have to collect and store volumes of the most sensitive personally identifiable information possible regardless of whether they have the infrastructure to adequately protect it — or even want to collect it. ... When backup, endpoint protection, disaster recovery and security monitoring operate through a single agent with one management console, there are no handoff points where data might be exposed and no integration vulnerabilities to exploit, and there is no confusion about which tool protects what. Native integration delivers practical benefits beyond security. MSPs can reduce the administrative burden of managing multiple vendor relationships, licenses and support contracts.


Is enterprise agentic AI adoption matching the hype?

“The expectations around AI and agents are huge. And vendors are making statements that all you need to solve your enterprise problems is to unleash an army of agents,” van der Putten tells ITPro. “But if not properly controlled and governed, this army is more likely to go and wreak havoc than bring peace and prosperity in the enterprise. And enterprises know this.” According to van der Putten, today’s AI agents are unable to take the real-world complexity into account, which the majority of enterprises need to deal with. And the thing that makes them appealing — their apparent autonomy — is also their biggest weakness. “Enterprises want to innovate, but they are held back by legacy,” van der Putten explains. ... “The sticking point isn’t the technology – it’s trust. Agents can already reconcile accounts, flag anomalies, even anticipate compliance risks, but adoption will only scale once businesses have confidence in how they operate, explain their reasoning, and can be audited.” Nowhere is the issue of trust more apparent than in the world of commerce, where AI agents are being used as assistants and autonomous actors, capable of initiating and completing purchases independently of the shopper. ... Although agentic commerce promises to streamline the path to purchase for businesses, Sheikrojan says that it’s a path paved with “blind spots”. This is because when an AI agent takes over the transaction, many of today’s retail processes rooted in context and behavioral signals, such as fraud prevention, disappear.


Power, not GPUs, will decide who wins AI

AI workloads scale differently from traditional IT. Where once we worried about server density in kilowatts per rack, we’re now talking about megawatts. That kind of thermal and electrical load exposes the inadequacies of legacy architectures built for virtualisation, not for vector processing or massive parallel training. As Stephen Worn put it, “AI isn’t just another workload; it’s a demanding tenant.” It’s a tenant with unpredictable consumption, heat spikes, and sub-millisecond tolerance for power fluctuation. And it’s not just moving in – it’s taking over. ... Downtime in AI is more than an outage; it’s a lost training cycle, corrupted model, or missed opportunity. Resilience in this context isn’t just about redundancy; it’s about reaction time. We need systems that operate on the same timelines as the workloads they protect. ... In a sense, the infrastructure must become intelligent; just like the workloads it supports. Data centres are evolving into living ecosystems, where compute behavior and physical response are tightly intertwined. ... So what does this all point to? Here’s a realistic, aspirational view of what AI-ready infrastructure could look like by the end of the decade: Hybrid Power Architectures: Combining traditional grid feeds, on-site renewables, and modular battery systems; Resilience by Design: Low-toxicity chemistries, automated failover, and microsecond response baked into every rack; AI-Managed AI Infrastructure: Neural networks monitoring and adjusting the environments they run in.


The Ultimate Betrayal: When Cyber Negotiators Became the Attackers

The allegations outline an audacious and calculated scheme that exploits the foundational trust between a victim and its incident response team. The indictment claims the defendants utilized the notorious BlackCat (ALPHV) ransomware variant to compromise targeted organizations. The irony, as noted by CNN, is that the accused were professionals whose entire business model was predicated on helping victims recover from these exact kinds of intrusions. The DOJ effectively accuses the U.S. ransomware negotiators of "launching their own ransomware attacks," according to TechCrunch. ... "'Zero Trust' is not just a security framework for your network; it must now be seen as a security framework that includes not just your network, but all the people and devices that have any type of access to it," Leighton said. "As a former intelligence officer, I couldn't help but think of Edward Snowden and how he compromised NSA's networks." "This case just proves that we have to extend our personnel vetting processes beyond our own organizations," he added. "We need to be able to also vet the employees of our suppliers, as well as those whose job it is to remediate breaches of our networks. This is easier said than done, but CISOs are going to have to work with their corporate legal teams to rewrite supplier contracts so they can vet third-party remediation team personnel independently."


Infostealers: Addressing a rising threat to UK businesses

Multiple infostealers exist, but several have been more dominant during 2025, according to experts. Raccoon Stealer stands out as the most frequently encountered infostealer, accounting for the highest volume of incidents, according to Rozenski. Despite law enforcement disruption, LummaStealer remains “one of the most prolific infostealers,” says Addison. It operates under a MaaS model, making it “accessible to a wide range of threat actors,” he says. ... Predictably, AI is also set to super-charge infostealer attacks. Walter says SentinelOne is now tracking a new AI-assisted infostealer it calls Predator AI. “The malware doesn’t just steal passwords and credentials. It integrates with ChatGPT to analyse huge amounts of stolen data to identify high-value accounts and business domains.” Predator AI is also able to organise the stolen data, enabling cybercriminals to “operate more efficiently” and “increase the speed and volume of attacks,” he says. “While this infostealer isn’t a game-changer yet, it shows where cybercriminals are investing their resources and what businesses should look out for next.” ... At the same time, breaking single sign-on journeys is “crucial” for critical applications, says Gee. He recommends requiring users to revalidate MFA when accessing critical applications, making sure admins are required to also do so.
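Gee's recommendation to revalidate MFA for critical applications can be sketched as a simple session check at the SSO layer: a stolen long-lived session token alone is then not enough to reach sensitive apps. The field names, timeout, and app list below are assumptions for illustration only.

```python
import time

MFA_MAX_AGE = 15 * 60  # hypothetical policy: re-prompt after 15 minutes

def needs_mfa_revalidation(session, app, critical_apps, now=None):
    """Return True if the session must re-prove MFA before reaching `app`."""
    now = now if now is not None else time.time()
    if app not in critical_apps:
        return False                       # ordinary apps ride the SSO session
    last = session.get("last_mfa_at")      # epoch seconds of last MFA success
    return last is None or (now - last) > MFA_MAX_AGE

session = {"user": "admin", "last_mfa_at": 1_000}
critical = {"payroll", "iam-console"}
print(needs_mfa_revalidation(session, "wiki", critical, now=2_000))         # False
print(needs_mfa_revalidation(session, "iam-console", critical, now=2_000))  # True
```

An infostealer that exfiltrates the session cookie still faces a fresh MFA challenge at the applications that matter most.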


EU lawmakers approve regulation to expand Europol’s capabilities in biometric data processing

European lawmakers have backed a proposal to give Europol a central role in coordinating the fight against smuggling networks and human trafficking and to strengthen the obligation among EU member states to share data, including biometrics. The support for the regulation comes amid criticism from rights groups and the EU data watchdog. ... The regulation also enables Europol to “effectively and efficiently process biometric data in order to better support Member States in cracking down on irregular migration.” “The effective use of biometric data is key to closing the gaps and blind spots that terrorists and other criminals seek to exploit by hiding behind false or multiple identities,” says the document. ... “The Europol Regulation unlawfully expands the EU’s digital surveillance infrastructure without appropriate safeguards,” says the report. “This is particularly important in the context of biometrics.” Facing pushback, the EU introduced significant changes to the proposal in May, allowing more flexibility for EU member states to decide whether to exchange data with Europol. The presidency of the Council and European Parliament negotiators reached a provisional agreement on the regulation in September. Europol’s legal framework already allows the agency to process biometric data for operational purposes and for preventing or combating crime. 

Daily Tech Digest - November 07, 2025


Quote for the day:

"The best teachers are those who don't tell you how to get there but show the way." -- @Pilotspeaker



AI spending may slow down as ROI remains elusive

Some AI experts agree with Forrester that an AI market correction is on the way. Microsoft founder Bill Gates recently talked about the existence of an AI bubble, and industry observers have noted that some AI excitement is dimming. Many don’t see an AI bubble that will burst in the near future, but it’s deflating a bit. Still others don’t see much of a slowdown in the near term. ... Some organizations are not achieving the accuracy they need from AI tools, and others are not finding their data to be easily accessible or properly structured, says Sam Ferrise, CTO of IT consulting firm Trinetix. “Many organizations are realizing that their expectations for AI accuracy and performance don’t always align with the level of investment they’re willing — or able — to make,” he says. “The key is calibrating expectations relative to both the investment and the use case.” In other cases, enterprises deploying AI are running into privacy or security problems, he adds. “Many teams successfully prove a use case with clear ROI, only to realize later that they must harden the solution before it can safely move into production,” Ferrise says. “When that alignment isn’t there, it’s natural for organizations to pause or delay spending until they can justify the value.” The prospect of a bubble bursting may be an overly dramatic scenario, although not impossible, he adds. It’s been easy for organizations to overlook intangible costs such as training, compliance, and governance.


Why can’t enterprises get a handle on the cloud misconfiguration problem?

“Microsoft, Google, and Amazon have handed us a problem,” says Andrew Wilder, CSO at Vetcor, a national network of more than 900 veterinary hospitals. “By default, everything is insecure, and you have to put security on top of it. It would be much better if they just gave us out-of-the-box secure stuff. Would you buy a car that doesn’t have locks? They wouldn’t even sell that car.” This security gap is what allows third-party vendors to exist, he says. “You should be building products — and I’m talking to you, Google, Microsoft, and Amazon — that are secure by design, so you don’t have to get a third-party tool. They should be out of the box secure.” ... When administrators or users make changes to cloud configurations in the cloud management consoles, it’s difficult to track those changes and to revert them if something goes wrong. Plus, humans can easily make mistakes. The solution experts advise is to adopt the principle of “infrastructure as code” and use configuration management tools so that all changes are checked against policies, tracked and audited, and can easily be rolled back. ... Companies will often have monitoring for major cloud services, but shadow IT deployments are left in the dark. This is less a technology problem than a management one and can be addressed by better communications with business units and a more disciplined approach to deploying technology on an enterprise-wide level. 
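The infrastructure-as-code workflow the experts advise can be illustrated with a toy pre-apply policy gate: proposed resource changes are checked against rules before they reach the cloud, so mistakes are caught in review rather than in production. Real pipelines validate Terraform plans or use engines like Open Policy Agent; every resource field and rule below is invented for illustration.

```python
# Hypothetical policies: each is a (description, predicate) pair over a resource.
POLICIES = [
    ("no public storage buckets",
     lambda r: not (r["type"] == "bucket" and r.get("public", False))),
    ("buckets must be encrypted at rest",
     lambda r: r["type"] != "bucket" or r.get("encrypted", False)),
]

def check_plan(resources):
    """Return (resource name, violated policy) pairs; empty means safe to apply."""
    violations = []
    for r in resources:
        for description, rule in POLICIES:
            if not rule(r):
                violations.append((r["name"], description))
    return violations

plan = [
    {"name": "logs", "type": "bucket", "public": True, "encrypted": True},
    {"name": "app-vm", "type": "vm"},
]
print(check_plan(plan))  # -> [('logs', 'no public storage buckets')]
```

Because the plan is code, the misconfiguration is also trivially revertible: rolling back is just checking out the previous version and re-applying.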


The Supply Chain Blind Spot: Protecting Data in Expanding IT Ecosystems

Data growth is no longer linear, it is exponential. The rise of AI, automation, and digital platforms has transformed how information is created, stored, and shared. In India, this acceleration is particularly visible. The country’s data centre industry has grown from 590 MW in 2019 to 1.4 GW in 2024, a 139% jump, and is projected to reach 3 GW by 2030, driven by cloud adoption, AI demand, and data localisation initiatives. This infrastructure boom, while positive, brings new operational realities. Most enterprises now operate across hybrid environments, combining on-premises, public cloud and SaaS-based data stores. Without unified oversight, these fragmented environments risk becoming silos. True resilience depends not just on protecting data but understanding where it lives, how it moves, and who controls it. ... Globally, enterprises are reframing resilience as a core business capability. This approach requires integrating resilience principles into decision-making: from procurement and architecture design to crisis response. Simulated attacks, failover testing and dependency audits are becoming part of daily operational culture, not annual exercises. For Indian organizations, this mindset shift is vital. RBI’s ICT risk management directives and the DPDP Act establish the baseline; the differentiator lies in how proactively organizations operationalize these expectations. 


The power of low-tech in a high-tech world

Our high-tech society is impressive in the collective. But it robs individuals of skills. Most kids now can’t write cursive. And they can’t read it, either. They can’t read an analog clock or a paper map. The acceleration of technological innovation also accelerates the rate at which we lose skills. Videogames, smartphones, and dating apps — aided and abetted by the trauma of the COVID-19 lockdowns a few years ago — have left many young people without the skills to meet and connect with anyone, contributing to a loneliness epidemic among the young. But losing old-fashioned skills and old-school tech knowledge is a choice we don’t have to make. ... Thousands of scientific reports all lead us to the same conclusion: Over-reliance on advanced technologies dulls critical thinking, weakens memory, reduces problem-solving skills, limits creativity, erodes attention spans, and fosters passive dependence on automated systems. ... What all these old-school approaches have in common is that they’re harder and take longer — and they leave you smarter and better connected. In other words, if you strategically cultivate the skills, habits, discipline, and practice of older tech, you’ll be much more successful in your career and your life. And here’s one final point: The more high-tech our culture becomes, the more impactful old-school tech will be. So yes, by all means become brilliantly skilled at AI chatbot prompt engineering.


Why Leaders Cannot Outsource Communication

When communication is delegated to a proxy, that signal weakens. Employees notice the gap between what the leader says or doesn’t say, and what the organization does. This is why communication has an outsized impact on engagement. Gallup finds that 70% of the variance in employee engagement is explained by managers and leaders, not perks or policies. When leaders own the message, they create psychological safety: the sense that it’s safe to commit, speak up and take risks. When they don’t, that safety erodes. ... Delegating communication is tempting. Leaders are busy. They hire communications officers and agencies to manage the message. These roles are valuable, but they can’t substitute for the leader’s voice. A speechwriter can shape phrasing and a PR team can guide timing, but only the leader can deliver authenticity. As Murphy has written, “Leaders are accountable to employees: Candor about bad news as well as the good, and feedback that aligns with expectations.” Authenticity requires candor, even when the message is difficult. When communication comes from anyone else, it’s interpreted as institutional rather than personal. And people follow people, not institutions. ... The Operator Economy demands a new kind of scale, one built not on capital or code, but on human alignment. Communication is infrastructure. The CEO becomes the signal source around which all systems calibrate. When leaders “scale themselves” through clarity and consistency, they convert trust into throughput. 


Breaking the Burnout Cycle: How Smart Automation and ASPM Can Restore Developer Joy

Smart automation can rescue developers from repetitive drudgery by using AI to handle routine tasks like test writing, bug fixing, and documentation. Modern application security posture management (ASPM) platforms exemplify this approach by providing contextualized risk assessments rather than overwhelming vulnerability dumps, helping security teams first understand which issues actually matter and then giving developers actionable info on the risk and how it should be fixed. These platforms excel at managing the volume and unpredictability of AI-generated code, turning what was once a blind spot into manageable, prioritized work. ... Technology alone isn't enough. Organizations must also prioritize developer growth by creating opportunities for experimentation, architectural decisions, and end-to-end project ownership while automation handles routine tasks. This means shifting from measuring output volume to focusing on meaningful metrics like code quality and developer satisfaction. AI represents an opportunity for developers to gain expertise in an emerging technology.  ... The developer talent crisis is solvable. While AI has introduced new complexities to the software development and security landscape, it also presents unprecedented opportunities for organizations willing to rethink how they support their development teams.
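The "contextualized risk assessment rather than overwhelming vulnerability dumps" idea can be illustrated with a small scoring sketch. This is not any specific ASPM product's algorithm; the fields and weight values below are assumptions chosen for illustration.

```python
# Illustrative sketch: rank findings by raw severity weighted with runtime
# context, so developers see a short prioritized list instead of a dump.

from dataclasses import dataclass

@dataclass
class Finding:
    id: str
    severity: float        # e.g. a CVSS-style base score, 0-10
    internet_facing: bool  # reachable from outside the perimeter?
    in_production: bool    # deployed, or only on a dev branch?
    exploit_known: bool    # public exploit available?

def risk_score(f: Finding) -> float:
    """Weight raw severity by contextual multipliers (weights are assumptions)."""
    score = f.severity
    score *= 2.0 if f.internet_facing else 1.0
    score *= 1.5 if f.in_production else 0.5
    score *= 2.0 if f.exploit_known else 1.0
    return score

def prioritize(findings: list, top_n: int = 5) -> list:
    """Return only the findings that actually matter, highest risk first."""
    return sorted(findings, key=risk_score, reverse=True)[:top_n]
```

Under this scheme a medium-severity bug that is internet-facing, in production, and actively exploited outranks a critical-severity finding sitting on an unreachable dev branch, which is the behavior the excerpt describes.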


The CIO’s Role In Data Democracy: Empowering Teams Without Losing Control

The modern CIO is at a point where they can choose between innovation and control. In the past, IT departments were seen as custodians of infrastructure who enforced strict rules about who could access data. Today, the CIO needs to reassess that approach. They shouldn’t prohibit access; instead, they should make access safe by building frameworks. The job has changed from saying “no” to making sure that when the company says “yes,” it does so smartly. The CIO is now both an architect and a guardian. They create systems that make data easy to get to, understand, and act on, all while keeping security and compliance in mind. ... The CIO is no longer a gatekeeper; they are a designer of trust. The goal is to make governance part of systems so that it is seamless, automatic, and easy to use. This change lets companies maintain oversight and stay in control without slowing decisions. Unified data taxonomies are the first step in building this framework: all departments use the same naming standards and definitions. When everyone speaks the same “data language,” there is less confusion and more cooperation. ... Effective governance demands collaboration between IT, compliance, and business leaders. The CIO must champion cross-functional alignment where all parties share responsibility for data integrity and use.
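The "unified data taxonomy" step can be made concrete with a tiny sketch: a single shared dictionary of approved field names and definitions that every department's datasets are checked against. The field names and definitions below are invented for illustration.

```python
# Minimal sketch of a shared data taxonomy: one source of truth for field
# names, plus an audit helper that flags columns outside the taxonomy.

TAXONOMY = {
    "customer_id": "Stable unique identifier for a customer",
    "order_date":  "Date the order was placed (ISO 8601)",
    "revenue_usd": "Gross revenue in US dollars",
}

def audit_dataset(columns: list) -> dict:
    """Split a dataset's columns into taxonomy-conformant and unknown names."""
    known = [c for c in columns if c in TAXONOMY]
    unknown = [c for c in columns if c not in TAXONOMY]
    return {"conformant": known, "unknown": unknown}
```

Running such a check in data pipelines is one way governance becomes "seamless, automatic, and easy to use": nonconforming names are surfaced at ingestion rather than debated later across departments.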


What keeps phishing training from fading over time

Employees who want to be helpful or appear responsive can become easier targets than those reacting to fear or haste. For CISOs, this reinforces the need to teach users about manipulation through trust and cooperation, not just the warning signs of urgent or threatening messages. ... Dubniczky said maintaining employee engagement over time is a major challenge for most organizations. “In contrast with other research in the area, a key contribution of ours was a mandatory training after each failed phishing attack,” he explained. “This strikes a good balance between not needlessly bothering careful employees with monthly or quarterly trainings while making sure that the highest risk individuals are constantly trained.” He recommended that organizations vary their phishing simulations to keep users alert. “We’d recommend performing monthly penetration tests on smaller groups of people in diverse departments of the organization with a seemingly random pattern, and making re-training mandatory in case of successful attacks,” he said. “It’s also difficult to generalize on this, but this approach seems much more effective than periodic presentation-style trainings.” ... One of the most striking findings involves the timing of feedback. When employees clicked a phishing link and then received an immediate explanation and training prompt, they were far less likely to repeat the behavior. Around seven in ten employees who failed once did not do so again.
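The cadence Dubniczky describes — small, seemingly random monthly target groups, with retraining mandatory only for those who fail — can be sketched in a few lines. Group size and selection logic here are assumptions, not details from the study.

```python
# Sketch of the recommended cadence: each month a small pseudo-random slice
# of employees receives a simulated phish; anyone who clicks is flagged for
# mandatory retraining with immediate feedback.

import random

def monthly_target_group(employees: list, fraction: float = 0.1, seed=None) -> list:
    """Pick a small pseudo-random subset to receive this month's simulation."""
    rng = random.Random(seed)
    k = max(1, int(len(employees) * fraction))
    return rng.sample(employees, k)

def must_retrain(clicked: set, targeted: list) -> list:
    """Return the targeted employees who failed and therefore get retrained."""
    return [e for e in targeted if e in clicked]
```

The design choice mirrors the study's finding: careful employees are not bothered with blanket quarterly sessions, while the highest-risk individuals get immediate, mandatory follow-up — the timing of feedback being what drove the roughly seven-in-ten non-repeat rate.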


The new QA playbook: Leveraging AI to amplify expertise, not replace it

Many quality teams have been part of the AI journey from the very beginning, contributing from concept to implementation and helping evaluate large language models to ensure quality and reliability. However, many AI features are not developed by QA practitioners, so it is essential to evaluate them through a QA lens. First, ensure the system can produce what your teams actually use, whether that is step lists, BDD-style scenarios, or free text that fits your templates and automation. Next, map the full data journey. Know whether prompts or results are kept, how encryption and minimization are applied, and where any content is stored. Finally, require fine-grained controls so you can limit usage by environment, project, and role. Regulated teams require an audit trail and clear accountability, which means governance must keep pace with adoption, or speed will outpace safety. Once review-first habits are in place, build on them. True oversight requires more than simply checking AI outputs; it demands deeper knowledge and understanding than the AI itself to spot gaps, inaccuracies, or misleading information. That’s what separates a passive reviewer from an effective human in the loop. ... Real gains from AI will not come from automation alone but from people who know how to guide it with clarity, context, and care. The future of testing depends on professionals who can combine technical fluency with critical thinking, ethical judgment, and a sense of ownership over quality.
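The "fine-grained controls so you can limit usage by environment, project, and role" requirement, together with the audit trail regulated teams need, can be sketched as a simple policy gate. The policy shape, role names, and environments below are assumptions for illustration, not any vendor's API.

```python
# Hedged sketch: gate AI-assisted test generation by (environment, role),
# and record every decision so there is an audit trail with accountability.

AI_USAGE_POLICY = {
    # (environment, role) pairs allowed to invoke the AI assistant
    ("dev", "qa_engineer"),
    ("dev", "developer"),
    ("staging", "qa_engineer"),
    # production deliberately absent: regulated environments stay manual
}

audit_log = []

def ai_allowed(environment: str, role: str, project: str) -> bool:
    """Check policy and record the decision for auditability."""
    allowed = (environment, role) in AI_USAGE_POLICY
    audit_log.append({"env": environment, "role": role,
                      "project": project, "allowed": allowed})
    return allowed
```

Keeping the decision and the log in one place is one way "governance keeps pace with adoption": every use of the assistant is attributable, and denied environments fail closed.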


Your outage costs more than you think – so design with resilience in mind

Service providers are under strain to deliver the rapid speeds and constant network uptime that modern life demands, with areas like remote working, financial transactions, cloud access and streaming services expected to work seamlessly as part of the daily lives of many end users. For many enterprises, their business depends on this connectivity. Even a single hour of network disruption can cost an organisation more than $300,000, and the long-term damage to customer trust often exceeds any immediate financial loss. Despite this, many organisations still rely on outdated infrastructure that cannot support the requirements of today’s end users. Legacy environments struggle with explosive data growth, the soaring demands of AI, and the complexity of distributed, cloud-first applications. At the same time, power limitations, infrastructure strain and inconsistent service levels put businesses at risk of falling behind. The gap between what service providers and enterprises need, and what their infrastructure can deliver, is widening. ... For service providers, investing in robust colocation and high-performance networking is not just about upgrading infrastructure, but enabling customers and partners worldwide to thrive in today’s fast-paced digital landscape. By offering resilient and scalable connectivity, providers can differentiate their service offering, attract high-value enterprise clients, and create new revenue streams based on reliability and performance.