
Daily Tech Digest - May 02, 2026


Quote for the day:

“The more you lose yourself in something bigger than yourself, the more energy you will have.” - Norman Vincent Peale

🎧 Listen to this digest on YouTube Music

▶ Play Audio Digest

Duration: 17 mins • Perfect for listening on the go.


The architectural decision shaping enterprise AI

In "The architectural decision shaping enterprise AI," Shail Khiyara argues that the long-term success of enterprise AI initiatives hinges on an often-overlooked architectural choice: how a system finds, relates, and reasons over information. The article outlines three primary patterns—vector embeddings, knowledge graphs, and context graphs—each offering unique advantages and trade-offs. Vector embeddings excel at identifying semantically similar unstructured data, making them ideal for rapid RAG deployments, yet they lack deep relational understanding. Knowledge graphs provide precise, traceable answers by mapping explicit relationships between entities, though they are resource-intensive to maintain. Crucially, Khiyara introduces context graphs, which capture the dynamic reasoning behind decisions to ensure continuity across multi-step workflows. Unlike static models, context graphs treat reasoning as a first-class data artifact, allowing AI to understand the "why" behind previous actions. The most effective enterprise strategies do not choose one in isolation but instead layer these patterns to balance speed, precision, and contextual awareness. Ultimately, Khiyara warns that leaving these decisions to default configurations leads to "confident mistakes" and trust erosion. For CIOs, intentional architectural design is not just a technical necessity but a fundamental business imperative to transition from isolated pilots to scalable, reliable AI ecosystems that deliver genuine organizational value.
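
Khiyara's layering point lends itself to a toy sketch. The corpus, embeddings, and threshold below are invented for illustration: vector similarity surfaces candidate documents, and graph edges then supply the explicit relationships that a pure embedding search would miss.

```python
import math

# Toy corpus: each document carries an embedding (stand-ins for real
# model-generated vectors) plus explicit graph edges to related entities.
DOCS = {
    "invoice-policy": {"vec": [0.9, 0.1, 0.2], "edges": ["finance-team", "q3-audit"]},
    "travel-policy":  {"vec": [0.2, 0.8, 0.1], "edges": ["hr-team"]},
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def layered_lookup(query_vec, min_sim=0.7):
    """Layer 1: vector similarity picks candidate documents.
    Layer 2: graph edges attach the explicitly related entities."""
    best = [name for name, d in DOCS.items() if cosine(query_vec, d["vec"]) >= min_sim]
    related = {edge for name in best for edge in DOCS[name]["edges"]}
    return best, sorted(related)
```

A real deployment would swap the dict for a vector index and a graph store, but the control flow (similarity first, relationships second) is the layering the article describes.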


The Evidence and Control Layer for Enterprise AI

The article "The Evidence and Control Layer for Enterprise AI" by Kishore Pusukuri argues that the transition from AI prototypes to production requires a robust architectural layer to manage the inherent unpredictability of agentic systems. This "Evidence and Control Layer" acts as a shared platform substrate that mediates between agentic workloads and enterprise resources, shifting governance from retrospective reviews to proactive, in-path execution controls. The framework is built upon three core pillars: trace-native observability, continuous trace-linked evaluations, and runtime-enforced guardrails. Unlike traditional logging, trace-native observability captures the complete execution path and decision context, providing the foundation for operational trust. Continuous evaluations act as quality gates, while runtime guardrails evaluate proposed actions—such as tool calls or data transfers—before side effects occur, ensuring safety and compliance in real-time. By formalizing policy-as-code and generating structured evidence events, the layer ensures that every material action is explicit, auditable, and cost-bounded. Ultimately, this centralized approach accelerates enterprise adoption by providing reusable governance defaults, effectively closing the "stochastic gap" and transforming black-box agents into trusted, scalable enterprise assets that operate with clear authority and within defined budget constraints.
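
As a rough illustration of an in-path runtime guardrail (the policy table, tool names, and cost caps below are hypothetical, not Pusukuri's framework), a check can evaluate a proposed tool call and emit a structured evidence event before any side effect occurs:

```python
import json
import time

# Hypothetical policy-as-code table: which tools an agent may invoke,
# and at what estimated cost. Tool names and caps are illustrative only.
POLICY = {
    "send_email": {"allowed": True, "max_cost": 0.01},
    "wire_funds": {"allowed": False, "max_cost": 0.0},
}

def guardrail(tool, est_cost):
    """Evaluate a proposed action before it executes, returning the
    decision plus a structured evidence event for the audit trail."""
    rule = POLICY.get(tool, {"allowed": False, "max_cost": 0.0})
    allowed = rule["allowed"] and est_cost <= rule["max_cost"]
    event = json.dumps({
        "ts": time.time(),
        "tool": tool,
        "est_cost": est_cost,
        "decision": "allow" if allowed else "deny",
    })
    return allowed, event
```

Unknown tools fall through to a deny-by-default rule, which is what makes every material action explicit and auditable rather than retrospectively reviewed.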


Organizational Culture As An Operating System, Not A Values System

In the article "Organizational Culture As An Operating System, Not A Values System," the author argues that the traditional definition of culture as a static set of internal values is no longer sufficient in a hyper-connected world. Modern organizational culture must be reframed as a dynamic operating system that bridges internal decision-making with external community engagement. While internal culture dictates how information flows and authority is exercised, external culture defines how a brand interacts with decentralized movements in art, fashion, and social identity. The disconnect often arises because corporate hierarchies prioritize control and predictability, whereas external cultural trends move at a high velocity from the periphery. To remain relevant, organizations must shift from a "broadcast" model to one of "co-creation," where authority is distributed to those closest to social signals and speed is enabled by trust rather than bureaucratic process. By treating culture with the same rigor as any other core business function, leaders can diagnose internal friction and align incentives to ensure the organization moves at the "speed of culture." Ultimately, success depends on building internal systems that allow companies to participate in and shape cultural conversations in real time, moving beyond corporate manifestos to authentic community collaboration.


Re‑Architecting Capability for AI: Governance, SMEs, and the Talent Pipeline Paradox

The article "Re-Architecting Capability for AI: Governance, SMEs, and the Talent Pipeline Paradox" examines the profound obstacles small and medium-sized enterprises encounter while attempting to establish formal AI oversight. Central to the discussion is the "talent pipeline paradox," which describes how the concentration of AI expertise within large technology firms creates a vacuum that leaves smaller organizations vulnerable. To address this, the author advocates for a strategic shift from talent acquisition to capability re-architecting. Rather than competing for scarce high-end specialists, SMEs should integrate AI governance into their existing business architecture through modular and risk-based frameworks. This approach emphasizes the importance of leveraging cross-functional internal teams, automated tools, and external partnerships to manage algorithmic risks effectively. By focusing on scalable governance patterns and clear accountability, SMEs can achieve ethical and regulatory compliance without the overhead of massive administrative departments. Ultimately, the piece suggests that the key to overcoming resource limitations lies in structural agility and the democratization of governance tasks. This enables smaller firms to harness the transformative power of artificial intelligence safely while maintaining a competitive edge in an increasingly automated global marketplace where talent remains the ultimate bottleneck.


The AI scaffolding layer is collapsing. LlamaIndex's CEO explains what survives

In this VentureBeat interview, LlamaIndex CEO Jerry Liu explores the significant transformation occurring within the "AI scaffolding" layer—the software stack connecting large language models to external data and applications. As frontier models increasingly incorporate native reasoning and retrieval capabilities, Liu suggests that simplistic RAG wrappers are rapidly losing their utility, leading to a "collapse" of the middle layer. To survive this consolidation, infrastructure tools must evolve from thin architectural shells into robust systems that manage complex data pipelines and orchestrate sophisticated agentic workflows. Liu emphasizes that while base models are becoming more powerful, they still lack the specialized, proprietary context required for high-stakes enterprise tasks. Consequently, the future of AI development lies in solving "hard" data problems, such as handling heterogeneous sources and ensuring data quality at scale. Developers are encouraged to pivot away from basic integration toward building deep, specialized intelligence layers that provide the structured context models inherently lack. Ultimately, the survival of platforms like LlamaIndex depends on their ability to offer advanced orchestration and data management that transcends the capabilities of the base models alone, marking a shift toward more resilient and professionalized AI engineering.


Guide for Designing Highly Scalable Systems

The "Guide for Designing Highly Scalable Systems" by GeeksforGeeks provides a comprehensive roadmap for building architectures capable of managing increasing traffic and data volume without performance degradation. Scalability is defined as a system’s ability to grow efficiently while maintaining stability and fast response times. The guide highlights two primary scaling strategies: vertical scaling, which involves enhancing a single server’s capacity, and horizontal scaling, which distributes workloads across multiple machines. To achieve high scalability, the article emphasizes the importance of architectural decomposition and loose coupling, often implemented through microservices or service-oriented architectures. Key components discussed include load balancers for even traffic distribution, caching mechanisms like Redis to reduce backend load, and advanced data management techniques such as sharding and replication to prevent database bottlenecks. Furthermore, the guide covers essential architectural patterns like CQRS and distributed systems to improve fault tolerance and resource utilization. Modern applications must account for various non-functional requirements such as availability and consistency while scaling. By prioritizing stateless designs and avoiding single points of failure, organizations can create robust systems that handle peak usage and unpredictable growth effectively. Ultimately, designing for scalability requires balancing cost, performance, and complexity to ensure long-term reliability in a dynamic digital landscape.
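
Two of the building blocks the guide names, hash-based sharding and caching to reduce backend load, can be sketched in a few lines. The shard count and lookup function are placeholders, and a plain dict stands in for Redis:

```python
import hashlib

CACHE = {}  # in-process dict standing in for a cache like Redis

def shard_for(key, n_shards=4):
    """Hash-based sharding: a stable hash maps each key to one shard,
    spreading data and load across machines."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % n_shards

def get_user(key, db_lookup):
    """Cache-aside read: serve from cache when possible; on a miss,
    read the owning shard and populate the cache."""
    if key in CACHE:
        return CACHE[key]
    value = db_lookup(shard_for(key), key)
    CACHE[key] = value
    return value
```

Because `shard_for` is deterministic, any stateless application server computes the same shard for the same key, which is exactly what lets horizontal scaling avoid a single point of coordination.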


Why Debugging is Harder than Writing Code?

The article "Why Debugging is Harder than Writing Code" from BetterBugs examines the fundamental reasons why developers spend nearly half their time fixing issues rather than creating new features. The core difficulty lies in the disparity between the "happy path" of initial development and the exponential state space of potential failures. While writing code involves building a single successful outcome, debugging requires navigating a combinatorially vast range of unexpected inputs and conditions. This process imposes a significant cognitive load, as developers must maintain a massive context window—often jumping between different files, servers, and logs—which incurs heavy switching costs. Furthermore, modern complexities like distributed systems, non-deterministic concurrency, and discrepancies between local and production environments add layers of friction. In concurrent systems, for instance, the mere act of observing a bug can change the timing and make the issue disappear. Ultimately, the article argues that debugging is more demanding because it forces engineers to move beyond theoretical models and confront the messy realities of hardware limits, memory leaks, and network latency. To manage these challenges, the author suggests that teams must prioritize observability and evidence-based reporting tools to bridge the gap between mental models and actual system behavior, ensuring more predictable software lifecycles.
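
The timing-sensitivity point can be shown deterministically. Modeling two threads' `counter += 1` as separate read and write steps (a simulation, not real threads) demonstrates that one interleaving gives the correct result while another silently loses an update:

```python
def run(schedule):
    """Simulate two 'threads' each performing counter += 1 as separate
    read and write steps; `schedule` fixes the interleaving."""
    counter = 0
    local = {}
    for thread, op in schedule:
        if op == "read":
            local[thread] = counter      # read the shared counter
        else:
            counter = local[thread] + 1  # write back the incremented copy
    return counter

SERIAL = [("A", "read"), ("A", "write"), ("B", "read"), ("B", "write")]
RACY   = [("A", "read"), ("B", "read"), ("A", "write"), ("B", "write")]
```

`SERIAL` yields 2; in `RACY`, B's write overwrites A's, yielding 1. In a real concurrent system the scheduler picks the interleaving, which is why adding a log line or breakpoint can shift the timing and make the bug vanish.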


Cybersecurity: Board oversight of operational resilience planning

The A&O Shearman guidance emphasizes that as cyberattacks grow more sophisticated and regulatory scrutiny intensifies, boards must adopt a proactive stance toward operational resilience. With the emergence of unpredictable criminal gangs and AI-driven threats, it is no longer sufficient to treat cybersecurity as a purely technical issue; it is a critical governance priority. To exercise effective oversight, boards should appoint dedicated individuals or committees to monitor cyber risks and ensure that Business Continuity and Disaster Recovery (BCDR) plans are robust, defensible, and accessible offline. Practical preparations must include clear decision-making protocols and alternative communication channels, such as Signal or WhatsApp, for use during systems outages. Additionally, leadership should oversee the development of pre-approved communication templates for stakeholders and define strict Recovery Time Objectives (RTOs). A cornerstone of this framework is the implementation of regular tabletop exercises and technical recovery drills that involve third-party providers to identify vulnerabilities. By documenting these proactive measures and integrating lessons learned into evolving strategies, boards can meet regulatory expectations for evidence-based oversight. Ultimately, this comprehensive approach to resilience planning helps organizations minimize the risk of material revenue loss and navigate the complexities of a volatile global digital landscape.


Beyond the Region: Architecting for Sovereign Fault Domains and the AI-HR Integrity Gap

In "Beyond the Region," Flavia Ballabene argues that software architects must evolve their definition of resilience from surviving mechanical failures to navigating "Sovereign Fault Domains." Traditionally, redundancy across Availability Zones addressed physical infrastructure outages; however, modern geopolitical shifts and evolving privacy laws now create "blast radii" where data becomes legally trapped or AI models suddenly non-compliant. Ballabene highlights an "AI-HR Integrity Gap," where centralized systems fail to account for regional jurisdictional constraints. To bridge this, she proposes shifting toward sovereignty-aware infrastructures. Key strategies include Managed Sovereign Cloud Models, which leverage localized partner-led controls like S3NS or T-Systems, and Cell-Based Regional Architectures, which deploy independent stacks for each major market to eliminate reliance on a global control plane. These approaches allow organizations to maintain operational continuity even when specific regions face regulatory upheavals. By auditing AI dependency graphs and prioritizing data residency, executives can transform compliance from a burden into a competitive advantage. Ultimately, the article suggests that in a fragmented global cloud, the most resilient HR and technology stacks are those built on digital trust and localized integrity, ensuring they remain robust against both technical glitches and the unpredictable tides of international policy.


Real-time operating systems for embedded systems

The article "Real-time operating systems for embedded systems" (available via ScienceDirect PII: S1383762126000275) provides a comprehensive examination of the architectural requirements and performance constraints inherent in modern real-time operating systems (RTOS). As embedded devices become increasingly integrated into safety-critical infrastructure, the study highlights the transition from simple cyclic executives to sophisticated, preemptive multitasking environments. The authors analyze key RTOS components, including deterministic scheduling algorithms, interrupt latency management, and inter-process communication mechanisms, emphasizing their role in ensuring temporal correctness. A significant portion of the discussion focuses on the trade-offs between monolithic and microkernel architectures, particularly regarding memory footprint and system reliability. By evaluating various commercial and open-source RTOS solutions, the research demonstrates how hardware-software co-design can mitigate the overhead typically associated with complex task synchronization. Ultimately, the paper argues that the future of embedded systems lies in adaptive RTOS frameworks that can dynamically balance power efficiency with the rigorous timing demands of Internet of Things (IoT) applications. This synthesis serves as a vital resource for engineers seeking to optimize system predictability in increasingly heterogeneous computing environments, ensuring that software responses remain consistent under peak load conditions.
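
The deterministic-scheduling discussion connects to a classic result worth sketching: the Liu and Layland utilization bound for rate-monotonic scheduling, a sufficient (not necessary) schedulability test. This is a well-known textbook result, not a claim about this particular paper's contribution:

```python
def rm_schedulable(tasks):
    """Liu & Layland sufficient test for rate-monotonic scheduling:
    n periodic tasks, each given as (cost, period), are schedulable
    if total utilization does not exceed n * (2**(1/n) - 1)."""
    n = len(tasks)
    utilization = sum(cost / period for cost, period in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound, utilization, bound
```

For three tasks the bound is about 0.78, so a task set using 65% of the CPU passes, while anything over 100% utilization trivially fails. Task sets between the bound and 1.0 need an exact response-time analysis.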

Daily Tech Digest - January 23, 2026


Quote for the day:

"Strong convictions precede great actions." -- James Freeman Clarke



90% of companies are woefully unprepared for quantum security threats

Companies shouldn't wait, Bain warned, pointing to rapid progress made by IBM, Google, and other industry leaders on this front. "At a certain threshold, quantum computing will be able to easily and quickly break asymmetric cryptography protocols such as Rivest-Shamir-Adleman (RSA), Diffie-Hellman (DH), and elliptic-curve cryptography (ECC) and reduce the time required, weakening symmetric cryptography such as the Advanced Encryption Standard (AES) and hashing functions," ... The highest impact will be on secure keys and tokens, digital certificates, authentication protocols, data encrypted at rest, and even network security and identity access management (IAM) tools. Essentially, anything currently relying on encryption. Beyond that, quantum computing could supercharge malware and make it easier to identify and weaponize "zero day" flaws, Bain warned. Another risk highlighted by security experts is "steal now, crack later" techniques, whereby threat actors harvest data now to decrypt later.  ... Companies need a board-led – and funded – roadmap to consider post-quantum risks across their business decision making, ensuring quantum resilience across their own suppliers, existing technology, and even their products. But so far, the Bain survey revealed only 12% of companies are considering quantum readiness as a key factor in procurement and risk assessments.
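
The distinction in the quote, breaking asymmetric schemes versus merely weakening symmetric ones, comes down to a back-of-the-envelope calculation. Shor's algorithm breaks RSA/DH/ECC outright, while Grover's search gives only a quadratic speedup against symmetric keys:

```python
def effective_bits(sym_key_bits):
    """Grover's algorithm turns a brute-force search over 2**n keys
    into roughly 2**(n/2) quantum steps, halving the effective bit
    strength of a symmetric cipher. (Shor breaks RSA/DH/ECC outright,
    so no analogous 'remaining strength' exists for those.)"""
    return sym_key_bits // 2
```

This arithmetic is why common post-quantum guidance keeps AES but moves long-lived secrets to AES-256, which retains roughly 128-bit effective strength.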


The New Rules of Work: What a global HR leader reveals about modern talent

The impact of AI on the workforce is a subject Sonia has thought deeply about, especially as it relates to entry-level talent. “There’s always been a question about repetitive engineering tasks—whether these should be done by engineers or by diploma holders. Now, with AI in the picture, many of these tasks will be automated,” she says. Rather than seeing this as a threat, Kutty believes it frees up human talent to focus on innovation and problem-solving. “Our true value at Quest Global comes from leveraging innovation to solve the toughest engineering problems. AI will allow us to do more of this meaningful work.” ... While the company offers AI-based courses and certifications, Kutty emphasises the importance of fostering a mindset of adaptability and systems thinking. “We call it nurturing ‘polymath engineers’—professionals who can think broadly, adapt to new challenges, and learn continuously,” she says. ... As the engineering and R&D sector prepares for rapid growth, Kutty identifies leadership development as her biggest challenge—and her greatest responsibility. “We need strong leaders who understand this industry and are ready to step up when the time comes. Planning for leadership succession keeps me up at night. It’s critical for our continued success.” On the other hand, client expectations have evolved alongside technological advances. “In the past, clients would tell us exactly what they wanted. Now, they expect us to tell them what’s possible with AI and technology. They see us as partners in innovation, not just service providers,” Kutty observes.


Work-from-office mandate? Expect top talent turnover, culture rot

There is value in cross-functional teams working together in person, says Lawrence Wolfe, CTO at marketing firm Converge. “When teams meet for architecture sessions, design sprints, or incident response, the pace of progress, as well as the level of clarity, may increase simply because being in-person caters to the way most people in the business interact,” he says. However, there are potential downsides for IT leaders, with strict work-from-office policies making it more difficult to attract and retain top IT talent. ... Despite possible resistance, it makes sense for some IT jobs to be tied to an office, says Lena McDearmid, founder and CEO of culture and leadership advisory firm Wryver. Some IT roles, including device provisioning, network operations, and conference room IT support, are better done in person, she notes. She sees some other benefits in specific situations. “In-person work is genuinely valuable for onboarding and mentoring early-career technologists, especially when learning how the organization actually operates, not just how the codebase works,” McDearmid says. “It’s also powerful when teams need to think together in high-bandwidth ways: whiteboards, war rooms, architecture reviews, incident response, or when solving messy, cross-functional problems.” ... IT leaders enforcing in-person work mandates can also focus on making the workplace a real place to collaborate, she adds. CIOs can align office space, meeting schedules, and in-office days so they reinforce the goals of collaboration and knowledge sharing, Wettemann adds.


Rethinking IT leadership to unlock the agility of ‘teamship’

Rather than waiting for the leader to set the pace, the best teams coach one another, challenge one another, co-elevate one another, and move faster, because they and their leaders have built cultures where candor is a shared responsibility. For CIOs navigating the messy middle of AI, modernization, and talent transformation, this shift from leadership to what Ferrazzi calls “teamship” may be the most important upgrade of all. ... The No. 1 shift is to move from leadership to teamship. That means stop thinking of leadership as a hub and spoke. Don’t think about what you need to give feedback on, how you need to hold people accountable, how you need to do this or that. Instead, think about how you get your team to step up and meet each other, to give each other feedback, to hold each other’s energy up. Get out of the center and expect your team to step up. ... To be effective, stress testing needs to be positioned as a service to the person who’s giving the project update. We’re not trying to make them look bad or catch them in what they’re doing wrong. The feedback should be offered and received as data, with no presumption that they have to act on it. ... That fear is rooted in a misunderstanding of how high-performing teams actually work. In traditional leadership models, accountability flows upward: People worry about what the boss will think. In teamship, accountability flows sideways: People worry about letting their peers down.


The Upside Down is Real: What Stranger Things Teaches Us About Modern Cybersecurity

The Upside Down’s danger lies in the unseen portals – the gates and rifts – that allow its monstrous inhabitants, like the Demogorgon and the Mind Flayer, to cross over and wreak havoc in the seemingly safe, familiar world of Hawkins. Today, nearly every business’s hidden reality is its extended attack surface. It’s the sprawling, complex, and often unmanaged network of IT, OT, IoT, medical, cloud systems and beyond that modern organizations rely on. ... For the CISO and security team, this translates directly to the need for full, continuous visibility across every single connected device and system to protect the entire attack surface and manage their organization’s cyber risk exposure in real time. Like the Dungeons and Dragons analogies the kids use to understand the creatures and their tactics, security teams rely on context and intelligence – risk scoring, vulnerability prioritization, and threat analysis – to understand how an asset is connected, why it is vulnerable, and what the most effective countermeasure is. ... First and foremost, cybersecurity requires teamwork, particularly through the fusion of IT, OT, security and business leadership so that they work from a unified view of any risks at hand. It also demands persistence from the dedicated security professionals protecting our digital infrastructure. Most of all, cybersecurity needs to be a proactive and preemptive effort where risk exposures are continuously monitored and threats can be stopped before they ever fully manifest.


Shadow AI: The emerging enterprise risk that can no longer be ignored

With regulatory frameworks tightening and emerging national standards, unsanctioned AI activity can quickly become a governance liability. Instead of reactive controls, organisations are now moving toward multi-layered visibility frameworks: monitoring external AI calls, classifying enterprise assets by sensitivity and tracking unmanaged AI usage. Forward-looking teams are even translating these metrics into financial exposure scores, linking AI misuse to operational, reputational and regulatory impact. Assigning monetary value to Shadow AI risk has proven effective for prioritising mitigation at leadership levels. ... A structured foundation is essential, comprised of trusted assessment frameworks, tested architectural blueprints and scalable AI operating models. Some organisations are pairing these with comprehensive training programs to build AI-literate leaders and teams, ensuring governance evolves alongside capability. This reflects a broader shift: responsible AI has now become the foundation of durable competitive advantage. ... Regulators, global partners and enterprise clients are seeking evidence of formal AI governance models, not just intent. For example, as per the Digital India Act, sectoral data localisation rules and global regulatory momentum are prompting enterprises to strengthen AI auditability, model documentation and workforce training. For many organisations, AI governance has moved from an operational task to a board-level agenda. 
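
A minimal version of the financial exposure score the article describes might look like the following; the sensitivity weights, call-volume normalization, and incident cost are invented purely for illustration, not taken from any named framework.

```python
# Hypothetical weights mapping data-classification tiers to risk multipliers.
SENSITIVITY_WEIGHT = {"public": 0.1, "internal": 0.5, "restricted": 1.0}

def exposure_score(calls_per_day, sensitivity, incident_cost):
    """Translate unmanaged AI usage into a rough monetary exposure:
    usage volume (capped at 1.0) x data sensitivity x incident cost."""
    volume = min(1.0, calls_per_day / 1000)
    return round(volume * SENSITIVITY_WEIGHT[sensitivity] * incident_cost, 2)
```

The point of such a score is less its precision than its currency: expressing Shadow AI risk in money is what gets it prioritized at board level.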


Ireland to make age checks through government app mandatory for social media

The plan is unprecedented among governments legislating online safety, in that it makes downloading the app, designed by the Government’s chief information officer, mandatory for age assurance. Per the Extra report, “if adults refuse to download the digital wallet, they will no longer be able to access their existing social media accounts.” “Mr. O’Donovan said the process of downloading the app might inconvenience someone for ‘three or four minutes’ but this was a small ask in order to protect children online.” O’Donovan has called the harmful effects of social media and other online content on youth a “severe public health issue.” ... Concerns about age assurance technology persist among privacy rights activists. Since age verification and facial age estimation often involves the processing of biometrics, the potential for sensitive data to be exposed is high. And requiring the process to run through a government product is likely to agitate fears about mass surveillance. O’Donovan says the risk to Ireland’s youth is higher. ... “At the end of the day, if the companies have a social conscience and are interested in the protection of children online, I don’t see why anybody who wouldn’t be trading in Ireland, not just domiciled in Ireland, wouldn’t adopt the format that we’re proposing,” he says. “Some of them do have, you know, something bordering on a social conscience, which is to be welcomed. But others don’t.”


Secure networking: the foundation for the AI era

Global networks have been under siege for years, but recent attacks are more sophisticated and move at unprecedented speed. Many organizations are still relying on outdated infrastructure, with Cisco research revealing that 48% of network assets worldwide are aging or obsolete. This creates vulnerabilities that attackers eagerly exploit. It’s no longer enough to patch and maintain; a fundamental shift in strategy is required. ... Modern networks typically span solutions and services from a range of different vendors, creating layers of complexity that can quickly overwhelm even experienced IT teams. This complexity often translates into vulnerability, especially when secure configurations aren’t consistently implemented or maintained. For many, simplicity and automation are now mission critical. Businesses increasingly need networks where secure configurations, protocols, and features are enabled by default and adapt automatically. ... Organizations now face the challenge of not only detecting threats quickly, but also responding before vulnerabilities can be exploited. There is an urgent need to reduce the attack surface, remove legacy insecure features, and introduce advanced capabilities for detection and response. ... The next generation of security requires networks to seamlessly provide identity management, deep visibility, integrated detection and protection, and streamlined management, while also incorporating advanced technologies like post-quantum cryptography. 


Ransomware gang’s slip-up led to data recovery for 12 US firms

Researchers at Florida-based Cyber Centaurs said Thursday they took advantage of a lapse in operational security by the gang: They found artifacts left behind by Restic, a legitimate open source backup utility the gang uses to encrypt and exfiltrate victim data into cloud storage environments it controls. The assumption that the gang regularly reuses Restic-based infrastructure led them to an unnamed cloud storage provider where stolen data was dumped. ... While Restic wasn’t used for exfiltration in this particular attack, Cyber Centaurs suspected the gang regularly used it, based on patterns seen in other incidents. It also suspected the infrastructure the crooks used was unlikely to be dismantled even after negotiations ended or payments were made by corporate victims. With that in mind, the incident response team developed a custom enumeration script to detect patterns that identify S3-style cloud bucket infrastructure the stolen data might be going to. The script ran through a curated list of candidate repository identifiers derived from previously observed Restic artifacts. For each candidate, environment variables were set to match the configuration style used by the threat actor, including the repository endpoint and encryption password. Restic was then instructed to list available snapshots in a structured format, enabling investigators to analyze results without interacting with the underlying data.
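
The enumeration loop described above can be sketched against Restic's actual configuration surface. `RESTIC_REPOSITORY`, `RESTIC_PASSWORD`, and `restic snapshots --json` are genuine Restic conventions, but the endpoint, repository IDs, and password values below are placeholders, and this is a hypothetical reconstruction, not the Cyber Centaurs script:

```python
import os
import subprocess

def candidate_env(endpoint, repo_id, password):
    """Build the environment Restic reads for one candidate repository.
    Endpoint and repo_id values here are illustrative placeholders."""
    env = dict(os.environ)
    env["RESTIC_REPOSITORY"] = f"s3:{endpoint}/{repo_id}"
    env["RESTIC_PASSWORD"] = password
    return env

def list_snapshots(env):
    """Ask restic for a machine-readable snapshot listing; a nonzero
    exit code means the candidate repository is absent or the
    password does not match."""
    proc = subprocess.run(
        ["restic", "snapshots", "--json"],
        env=env, capture_output=True, text=True,
    )
    return proc.stdout if proc.returncode == 0 else None
```

Listing snapshot metadata rather than restoring data is what let investigators confirm which candidate repositories were live without touching victims' files.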


The Real Attack Surface Isn’t Code Anymore — It’s Business Users

Traditional AppSec programs are optimized for code stored in repositories, pushed through pipelines, and deployed through CI/CD, not for no-code apps, connectors, and automations created on platforms like Power Platform, ServiceNow, Salesforce, and UiPath. Meanwhile, most organizations assume business-user automations are simple, low-risk, and limited in scope. The reality is more complex. Citizen developers now outnumber traditional software developers by an order of magnitude. Plus, they are wiring together data sources, triggering multi-system workflows, and calling APIs, not just building basic macros or departmental utilities. Because these automations are created outside engineering governance, traditional monitoring tools never see them. ... What emerges is a shadow layer of business logic that sits entirely outside the boundaries of traditional AppSec, DevSecOps, and identity programs. As long as ownership remains fragmented and discovery elusive, security debt continues to grow unchecked. ... We’re entering an era where the most dangerous vulnerabilities aren’t in the code AppDev teams write, but in the thousands of workflows and automations business users build on their own. The sooner organizations recognize and confront the invisible no-code estate, the faster they can reduce the security debt accumulating inside their infrastructure.

Daily Tech Digest - April 17, 2025


Quote for the day:

"We are only as effective as our people's perception of us." -- Danny Cox



Why data literacy is essential - and elusive - for business leaders in the AI age

The rising importance of data-driven decision-making is clear, yet the literacy it demands remains elusive: trust in the data underpinning these decisions is falling. Business leaders do not feel equipped to find, analyze, and interpret the data they need in an increasingly competitive business environment. The added complexity is the convergence of macro and micro uncertainties -- including economic, political, financial, technological, competitive landscape, and talent shortage variables.  ... The business need for greater adoption of AI capabilities, including predictive, generative and agentic AI solutions, is increasing the need for businesses to have confidence and trust in their data. Survey results show that higher adoption of AI will require stronger data literacy and access to trustworthy data. ... The alarming part of the survey is that 54% of business leaders are not confident in their ability to find, analyze, and interpret data on their own. And fewer than half of business leaders are sure they can use data to drive action and decision-making, generate and deliver timely insights, or effectively use data in their day-to-day work. Data literacy and confidence in the data are two growth opportunities for business leaders across all lines of business.


Cyber threats against energy sector surge as global tensions mount

These cyber-espionage campaigns are primarily driven by geopolitical considerations, as tensions shaped by the Russo-Ukraine war, the Gaza conflict, and the U.S.’ “great power struggle” with China are projected into cyberspace. With hostilities rising, potentially edging toward a third world war, rival nations are attempting to demonstrate their cyber-military capabilities by penetrating Western and Western-allied critical infrastructure networks. Fortunately, these nation-state campaigns have overwhelmingly been limited to espionage, as opposed to Stuxnet-style attacks intended to cause harm in the physical realm. A secondary driver of increasing cyberattacks against energy targets is technological transformation, marked by cloud adoption, which has largely mediated the growing convergence of IT and OT networks. OT-IT convergence across critical infrastructure sectors has thus made networked industrial Internet of Things (IIoT) appliances and systems more penetrable to threat actors. Specifically, researchers have observed that adversaries are using compromised IT environments as staging points to move laterally into OT networks. Compromising OT can be particularly lucrative for ransomware actors, because this type of attack enables adversaries to physically paralyze energy production operations, empowering them with the leverage needed to command higher ransom sums. 


The Active Data Architecture Era Is Here, Dresner Says

“The buildout of an active data architecture approach to accessing, combining, and preparing data speaks to a degree of maturity and sophistication in leveraging data as a strategic asset,” Dresner Advisory Services writes in the report. “It is not surprising, then, that respondents who rate their BI initiatives as a success place a much higher relative importance on active data architecture concepts compared with those organizations that are less successful.” Data integration is a major component of an active data architecture, but there are different ways that users can implement data integration. According to Dresner, the majority of active data architecture practitioners are utilizing batch and bulk data integration tools, such as ETL/ELT offerings. Fewer organizations are utilizing data virtualization as the primary data integration method, or real-time event streaming (e.g., Apache Kafka) or message-based data movement (e.g., RabbitMQ). Data catalogs and metadata management are important aspects of an active data architecture. “The diverse, distributed, connected, and dynamic nature of active data architecture requires capabilities to collect, understand, and leverage metadata describing relevant data sources, models, metrics, governance rules, and more,” Dresner writes. 


How can businesses solve the AI engineering talent gap?

“It is unclear whether nationalistic tendencies will encourage experts to remain in their home countries. Preferences may not only be impacted by compensation levels, but also by international attention to recent US treatment of immigrants and guests, as well as controversy at academic institutions,” says Bhattacharyya. But businesses can mitigate this global uncertainty, to some extent, by casting their hiring net wider to include remote working. Indeed, Thomas Mackenbrock, CEO-designate of Paris-headquartered BPO giant Teleperformance, says that the company’s global footprint helps it to fulfil AI skills demand. “We’re not reliant on any single market [for skills] as we are present in almost 100 markets,” explains Mackenbrock. ... “The future workforce will need to combine human ingenuity with new and emerging AI technologies; going beyond just technical skills alone,” says Khaled Benkrid, senior director of education and research at Arm. “Academic institutions play a pivotal role in shaping this future workforce. By collaborating with industry to conduct research and integrate AI into their curricula, they ensure that graduates possess the skills required by the industry. “Such collaborations with industry partners keep academic programs aligned with research frontiers and evolving job market demands, creating a seamless transition for students entering the workforce,” says Benkrid.


Breaking Down the Walls Between IT and OT

“Even though there's cyber on both sides, they are fundamentally different in concept,” Ian Bramson, vice president of global industrial cybersecurity at Black & Veatch, an engineering, procurement, consulting, and construction company, tells InformationWeek. “It's one of the things that have kept them more apart traditionally.” ... “OT is looked at as having a much longer lifespan, 30 to 50 years in some cases. An IT asset, the typical laptop these days that's issued to an individual in a company, three years is about when most organizations start to think about issuing a replacement,” says Chris Hallenbeck, CISO for the Americas at endpoint management company Tanium. ... The skillsets required of the teams to operate IT and OT systems are also quite different. On one side, you likely have people skilled in traditional systems engineering. They may have no idea how to manage the programmable logic controllers (PLC) commonly used in OT systems. The divide between IT and OT has been, in some ways, purposeful. The Purdue model, for example, provides a framework for segmenting ICS networks, keeping them separate from corporate networks and the internet. ... Cyberattack vectors on IT and OT environments look different and result in different consequences. “On the IT side, the impact is primarily data loss and all of the second order effects of your data getting stolen or your data getting held for ransom,” says Shankar. 


Are Return on Equity and Value Creation New Metrics for CIOs?

While driving efficiency is not a new concept for technology leaders, what is different today is the scale and significance of their efforts. In many organizations, CIOs are being tasked with reimagining how value is generated, assessed and delivered. ... Traditionally, technology ROI discussions have focused on cost savings, automation consolidation and reduced headcount. But that perspective is shifting rapidly. CIOs are now prioritizing customer acquisition, retention, pricing power and speed to market. CIOs also play a more integral role in product innovation than ever before. To remain relevant, they must speak the language of gross margin, not just uptime. This evolution is increasingly reflected in boardroom conversations. CIOs once presented dashboards of uptime and service-level agreements, but today, they discuss customer value, operational efficiency and platform monetization. ... In some cases, technology leaders scale too quickly before proving value. For example, expensive cloud migrations may proceed without a corresponding shift in the business model. This can result in data lakes with no clear application or platforms launched without product-market fit. These missteps can severely undermine ROE. 


AI brings order to observability disorder

Artificial intelligence has contributed to complexity. Businesses now want to monitor large language models as well as applications to spot anomalies that may contribute to inaccuracies, bias, and slow performance. Legacy observability systems were never designed to bring together these disparate sources of data. A unified observability platform leveraging AI can radically simplify the tools and processes for improved visibility and faster problem resolution, enabling the business to optimize operations based on reliable insights. By consolidating on one set of integrated observability solutions, organizations can lower costs, simplify complex processes, and enable better cross-function collaboration. “Noise overwhelms site reliability engineering teams,” says Gagan Singh, Vice President of Product Marketing at Elastic. Irrelevant and low-priority alerts can overwhelm engineers, leading them to overlook critical issues and delaying incident response. Machine learning models are ideally suited to categorizing anomalies and surfacing relevant alerts so engineers can focus on critical performance and availability issues. “We can now leverage GenAI to enable SREs to surface insights more effectively,” Singh says.
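The triage step Singh describes can be illustrated with a toy example. The sketch below is illustrative only (real observability platforms use far richer models than a z-score); it flags latency measurements that deviate sharply from the baseline, the kind of filtering that keeps low-priority noise away from SREs:

```python
from statistics import mean, stdev

def flag_anomalies(values, threshold=2.5):
    """Flag measurements whose z-score against the series exceeds the
    threshold. A toy stand-in for the anomaly models a unified
    observability platform would apply before surfacing alerts."""
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:
        return []
    return [v for v in values if abs(v - mu) / sigma > threshold]

baseline = [100, 102, 98, 101, 99, 103, 97, 100, 101, 99]  # latency in ms
print(flag_anomalies(baseline + [500]))  # the 500 ms spike is surfaced
```

In production the same idea runs continuously over streaming telemetry, with seasonality-aware models rather than a single global mean.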


Why Most IaC Strategies Still Fail — And How To Fix Them

There are a few common reasons IaC strategies fail in practice. Let’s explore what they are, and dive into some practical, battle-tested fixes to help teams regain control, improve consistency and deliver on the original promise of IaC. ... Without a unified direction, fragmentation sets in. Teams often get locked into incompatible tooling — some using AWS CloudFormation for perceived enterprise alignment, others favoring Terraform for its flexibility. These tool silos quickly become barriers to collaboration. ... Resistance to change also plays a role. Some engineers may prefer to stick with familiar interfaces and manual operations, viewing IaC as an unnecessary complication. Meanwhile, other teams might be fully invested in reusable modules and automated pipelines, leading to fractured workflows and collaboration breakdowns. Successful IaC implementation requires building skills, bridging silos and addressing resistance with empathy and training — not just tooling. To close the gap, teams need clear onboarding plans, shared coding standards and champions who can guide others through real-world usage — not just theory. ... Drift is inevitable: manual changes, rushed fixes and one-off permissions often leave code and reality out of sync. Without visibility into those deviations, troubleshooting becomes guesswork. 
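Drift detection itself is conceptually simple, which is why visibility, not mechanism, is the hard part. A minimal sketch (the resource attributes are hypothetical; real tools such as `terraform plan` obtain the observed state from provider APIs):

```python
def detect_drift(declared: dict, observed: dict) -> dict:
    """Return attributes whose live value no longer matches the code."""
    drift = {}
    for key, want in declared.items():
        have = observed.get(key)
        if have != want:
            drift[key] = {"declared": want, "observed": have}
    return drift

# Illustrative resource: code says private, someone flipped it by hand.
declared = {"instance_type": "t3.micro", "port": 443, "public": False}
observed = {"instance_type": "t3.micro", "port": 443, "public": True}
print(detect_drift(declared, observed))
# {'public': {'declared': False, 'observed': True}}
```

Surfacing such diffs routinely, rather than during an incident, is what turns troubleshooting from guesswork back into inspection.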


What will the sustainable data center of the future look like?

The energy issue does not affect only operators and suppliers. If a customer uses a lot of energy, they will get a bill to match, says Van den Bosch. “I [as a supplier] have to provide the customer with all kinds of details about my infrastructure. That includes everything from air conditioning to the specific energy consumption of the server racks. The customer is then able to reduce that energy consumption.” This can be done, for example, by replacing servers earlier than before, a departure from the upgrade cycles of yesteryear. Ruud Mulder of Dell Technologies calls for the sustainability of equipment to be made measurable in great detail. This can be done by means of a digital passport, showing where all the materials come from and how recyclable they are. He thinks there is still much room for improvement in this area. For example, future designs can be recycled better by separating plastic and gold from each other, refurbishing components and more. This yield increase is often attractive, as more computing power is required for ambitious AI plans, and the efficiency of chips increases with each generation. “The transition to AI means that you sometimes have to say goodbye to your equipment sooner,” says Mulder. The AI issue is highly relevant to the future of the modern data center in any case. 


Fitness Functions for Your Architecture

Fitness functions offer us self-defined guardrails for certain aspects of our architecture. If we stay within certain (self-chosen) ranges, we're safe (our architecture is "good"). ... Many projects already use some kinds of fitness functions, although they might not use the term. For example, metrics from static code checkers, linters, and verification tools (such as PMD, FindBugs/SpotBugs, ESLint, SonarQube, and many more). Collecting the metrics alone doesn't make it a fitness function, though. You'll need fast feedback for your developers, and you need to define clear measures: limits or ranges for tolerated violations and actions to take if a metric indicates a violation. In software architecture, we have certain architectural styles and patterns to structure our code in order to improve understandability, maintainability, replaceability, and so on. Maybe the most well-known pattern is a layered architecture with, quite often, a front-end layer above a back-end layer. To take advantage of such layering, we'll allow and disallow certain dependencies between the layers. Usually, dependencies are allowed from top to bottom, i.e. from the front end to the back end, but not the other way around. A fitness function for a layered architecture will analyze the code to find all dependencies between the front end and the back end.
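Such a dependency check is straightforward to automate. The sketch below is a simplified stand-in for purpose-built tools (ArchUnit in the Java world, or linter rules); the `backend`/`frontend` layer names are assumptions for illustration. It walks a Python source tree and reports back-end modules that import from the front end:

```python
import ast
import pathlib

# Hypothetical rule set: back-end code must never import front-end code.
FORBIDDEN = {"backend": {"frontend"}}

def layer_violations(src_root: str) -> list:
    """Scan each restricted layer's Python files and report imports
    that cross a layer boundary in the disallowed direction."""
    violations = []
    root = pathlib.Path(src_root)
    for layer, banned in FORBIDDEN.items():
        for py in (root / layer).rglob("*.py"):
            tree = ast.parse(py.read_text())
            for node in ast.walk(tree):
                if isinstance(node, ast.Import):
                    names = [alias.name for alias in node.names]
                elif isinstance(node, ast.ImportFrom) and node.module:
                    names = [node.module]
                else:
                    continue
                for name in names:
                    if name.split(".")[0] in banned:
                        violations.append(f"{py}: imports {name}")
    return violations
```

Wired into CI with a zero-violations limit, this becomes a fitness function in the article's sense: a clear measure, fast feedback, and a defined action when it fails.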

Daily Tech Digest - January 27, 2024

The future of biometrics in a zero trust world

Nearly one in three CEOs and members of senior management have fallen victim to phishing scams, either by clicking on a phishing link or by sending money. C-level executives are the primary targets for biometric and deep fake attacks because they are four times more likely to be victims of phishing than other employees, according to Ivanti’s State of Security Preparedness 2023 Report. Ivanti found that whale phishing is the latest digital epidemic to attack the C-suite of thousands of companies. ... In response to the increasing need for better biometric security globally, Badge Inc. recently announced the availability of its patented authentication technology that renders personal identity information (PII) and biometric credential storage obsolete. Badge also announced an alliance with Okta, the latest in a series of partnerships aimed at strengthening Identity and Access Management (IAM) for their shared enterprise customers. Srivastava explained how her company’s approach to biometrics eliminates the need for passwords, device redirects, and knowledge-based authentication (KBA). Badge supports an enroll once and authenticate on any device workflow that scales across an enterprise’s many threat surfaces and devices. 


Understanding CQRS Architecture

CRUD and CQRS are both tactical patterns, concentrating on the implementation specifics at the level of individual services. Therefore, asserting that an organization relies entirely on a CQRS architecture may not be entirely accurate. While certain services may adopt this architecture, it is typical for other services to employ simpler paradigms. The entire organization may not adhere to a unified style for all problems. The CRUD architecture assumes the existence of a single model for both read and update operations. CRUD operations are typically linked with traditional relational database systems, and numerous applications adopt a CRUD-based approach for data management. Conversely, the CQRS architecture assumes the presence of distinct models for queries and commands. While this paradigm is more intricate to implement and introduces certain subtleties, it provides the advantage of enabling stricter enforcement of data validation, implementation of robust security measures, and optimization of performance. These definitions may appear somewhat vague and abstract at the moment, but clarity will emerge as we delve into the details. It's important to note here that CQRS or CRUD should not be regarded as an overarching philosophy to be blindly applied in all circumstances. 
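The distinct-models idea can be made concrete with a minimal sketch (the order-handling names are hypothetical; real CQRS systems typically put a message bus and separate storage between the two sides):

```python
from dataclasses import dataclass

# Write side: commands pass through a handler that enforces strict
# validation before appending events.
@dataclass
class PlaceOrder:
    order_id: str
    quantity: int

class OrderCommandHandler:
    def __init__(self, event_log: list):
        self.event_log = event_log

    def handle(self, cmd: PlaceOrder):
        if cmd.quantity <= 0:  # validation lives on the command path
            raise ValueError("quantity must be positive")
        self.event_log.append(("order_placed", cmd.order_id, cmd.quantity))

# Read side: a denormalized view optimized for queries, built from events.
class OrderReadModel:
    def __init__(self):
        self.by_id = {}

    def apply(self, event):
        kind, order_id, qty = event
        if kind == "order_placed":
            self.by_id[order_id] = {"quantity": qty}

    def get(self, order_id):
        return self.by_id.get(order_id)

events = []
OrderCommandHandler(events).handle(PlaceOrder("o-1", 3))
view = OrderReadModel()
for e in events:
    view.apply(e)
print(view.get("o-1"))  # {'quantity': 3}
```

The extra moving parts are exactly the "intricacy" the article mentions: two models to keep in sync in exchange for independent validation, security, and performance tuning on each side.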


Role of Wazuh in building a robust cybersecurity architecture

Wazuh is a free and open source security solution that offers unified XDR and SIEM protection across several platforms. Wazuh protects workloads across virtualized, on-premises, cloud-based, and containerized environments to provide organizations with an effective approach to cybersecurity. By collecting data from multiple sources and correlating it in real-time, it offers a broader view of an organization's security posture. Wazuh plays a significant role in implementing a cybersecurity architecture, providing a platform for security information and event management, active response, compliance monitoring, and more. It provides flexibility and interoperability, enabling organizations to deploy Wazuh agents across diverse operating systems. Wazuh is equipped with a File Integrity Monitoring (FIM) module that helps detect file changes on monitored endpoints. It takes this a step further by combining the FIM module with threat detection rules and threat intelligence sources to detect malicious files, allowing security analysts to stay ahead of the threat curve. Wazuh also provides out-of-the-box support for compliance frameworks like PCI DSS, HIPAA, GDPR, NIST SP 800-53, and TSC. 
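Wazuh's FIM module is driven by the `<syscheck>` section of the agent configuration (ossec.conf). An illustrative fragment; the paths and interval below are example values, not recommendations:

```xml
<!-- Fragment of the agent's ossec.conf: watch system config and binaries -->
<syscheck>
  <frequency>43200</frequency>
  <directories check_all="yes" report_changes="yes">/etc,/usr/bin</directories>
  <ignore>/etc/mtab</ignore>
</syscheck>
```

Changes detected here are what the manager then correlates with detection rules and threat intelligence.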


Budget cuts loom for data privacy initiatives

In addition to difficulty understanding the privacy regulatory landscape, organizations also face other data privacy challenges, including budget: 43% of respondents say their privacy budget is underfunded and only 36% say their budget is appropriately funded. When looking at the year ahead, only 24% say they expect budget to increase (down 10 points from last year), and only one percent say it will remain the same (down 26 points from last year). 51% expect a decrease in budget, which is significantly higher than last year when only 12% expected a decrease in budget. For those seeking resources, technical privacy positions are in highest demand, with 62% of respondents indicating there will be increased demand for technical privacy roles in the next year, compared to 55% for legal/compliance roles. However, respondents indicate there are skills gaps among these privacy professionals; they cite experience with different types of technologies and/or applications (63%) as the biggest one. When looking at common privacy failures, respondents pinpointed the lack of or poor training (49%), not practicing privacy by design (44%) and data breaches (42%) as the main concerns.


How to become a Chief Information Security Officer

In general, the CISO position is well-paid. Due to high demand and a limited talent pool, top-tier CISOs have commanded salaries in excess of $2.3 million. Nonetheless, executive remuneration may vary based on industry, company size and specifics of a role. The CISO typically manages a team of cyber security experts (sometimes multiple teams) and collaborates with high-level business stakeholders to facilitate the strategic development and completion of cyber security initiatives. ... While experience in cyber security does count for a lot, and while smart and talented people do ascend to the CISO role without extensive formal schooling, it can pay to get the right education. Most enterprises will expect that a potential CISO have a bachelor’s degree in computer science (or a similar discipline). There are exceptions, but an undergraduate degree is often used as a credibility benchmark. ... When it comes to real-world experience, most CISO roles require a minimum of five years’ time spent in the industry. A potential CISO should maintain broad knowledge of a variety of platforms and solutions, along with a strong understanding of both cyber security history and modern day cyber security threats.


I thought software subscriptions were a ripoff until I did the math

Selling perpetual licenses means you get a big surge in revenue with each new release. But then you have to watch that cash pile dwindle as you work on the next version and try to convince your customers to pay for the upgrade. If you want the opportunity to continually improve your software, you need to bring in enough revenue each year to justify the time and resources you spend on the project. That's the difference between a sustainable business and a hobby. It strikes me that the real objection to software as a subscription isn't to the business model, but rather to the price. If you think a fair price for a piece of software is closer to $50 than $500, and you should be able to use it in perpetuity, you're telling the developer that you're willing to pay them no more than a few bucks a month. They're trying to tell you that's not enough to sustain a software business, and maybe you should try a free, open-source option instead. All the developers that are migrating to a cloud-based subscription model are taking a necessary step to help ensure their long-term survival. The challenge for companies playing in this space is to make it crystal clear that their subscriptions offer real value.
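The pricing argument is easy to sanity-check with back-of-the-envelope arithmetic. The figures below are illustrative, not from the article: a $500 perpetual license kept for five years versus a $15-per-month subscription over the same period:

```python
# Hypothetical comparison: perpetual license vs. subscription.
perpetual_price = 500.00
years_of_use = 5
perpetual_per_month = perpetual_price / (years_of_use * 12)  # about $8.33/mo

subscription_per_month = 15.00
subscription_total = subscription_per_month * years_of_use * 12  # $900 total

print(f"perpetual: ${perpetual_per_month:.2f}/mo amortized; "
      f"subscription: ${subscription_total:.0f} over {years_of_use} years")
```

The amortized perpetual price is the "few bucks a month" the author describes; whether that sustains a development team is the crux of the disagreement.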


Filling the Cybersecurity Talent Gap

Thankfully, there is a talented group in the veteran community ready and willing to meet the challenge. Through their unique skills, discipline, and unmatched experience, veterans are perfectly suited to help address the talent gap and growing cyber threats we face. Not only that, but veterans will find that IT and cybersecurity provide a second career as they transition out of their service. Veterans leave service with a wide range of talents that have several applications outside of the military. This includes both what are often called "soft skills," or those that are beneficial in a number of settings, as well as technical abilities well-suited for cybersecurity and IT. ... As the industry continues to incorporate more secure by design principles that guide how we approach security and cyber resiliency, we need a workforce that understands the importance of security and defense. To make this a reality, we need both the government and private companies to step up and create the right pathways for veterans to enter the workforce. This can include expanding the GI Bill to add additional incentives for careers in cybersecurity. Private companies should also offer more hands-on workshops and training that can both provide a way for applicants to learn and help companies fill their open positions.


How Much Architecture Is “Enough?”: Balancing the MVP and MVA Helps You Make Better Decisions

The critical challenge that the MVA must solve is that it must answer the MVP’s current challenges while anticipating but not actually solving future challenges. In other words, the MVA must not require unacceptable levels of rework to actually solve those future problems. Some rework is okay and expected, but the words "complete rewrite" mean that the architecture has failed and all bets on viability are off. As a result of this, the MVA hangs in a dynamic balance between solving future problems that may never exist, and letting technical debt pile up to the point where it leads to, metaphorically, architectural bankruptcy. Being able to balance these two forces is where experience comes in handy. ... The development team creates the initial MVA based on their initial and often incomplete understanding of the problems the MVA needs to solve. They will not usually have much in the way of QARs, perhaps only broad organizational "standards" that are more aspirational than accurate. These initial statements are often so vague as to be unhelpful, e.g. "the system must support very large numbers of concurrent users", "the system must be easy to support and maintain", "the system must be secure against external threats", etc.


Group permission misconfiguration exposes Google Kubernetes Engine clusters

The problem is that in most other systems “authenticated users” are users that the administrators created or defined in the system. This is also the case in privately self-managed Kubernetes clusters or for the most part in clusters set up on other cloud services providers such as Azure or AWS. So, it’s not hard to see how some administrators might conclude that system:authenticated refers to a group of verified users and then decide to use it as an easy method to assign some permissions to all those trusted users. “GKE, in contrast to Amazon Elastic Kubernetes Service (EKS) and Azure Kubernetes Service (AKS), exposes a far-reaching threat since it supports both anonymous and full OpenID Connect (OIDC) access,” the Orca researchers said. “Unlike AWS and Azure, GCP’s managed Kubernetes solution considers any validated Google account as an authenticated entity. Hence, system:authenticated in GKE becomes a sensitive asset administrators should not overlook.” The Kubernetes API can integrate with many authentication systems and since access to Google Cloud Platform and all of Google’s services in general is done through Google accounts, it makes sense to also integrate GKE with Google’s IAM and OAuth authentication and authorization system.
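Auditing for this misconfiguration is straightforward once bindings are exported. A sketch (the binding names are invented; the parsing assumes the JSON shape returned by `kubectl get clusterrolebindings -o json`):

```python
# Broad built-in groups that should rarely appear in role bindings.
RISKY_SUBJECTS = {"system:authenticated", "system:anonymous",
                  "system:unauthenticated"}

def find_risky(bindings: dict) -> list:
    """Scan parsed ClusterRoleBinding output and return
    (binding_name, subject_name) pairs granting rights to broad groups."""
    findings = []
    for item in bindings.get("items", []):
        for subj in item.get("subjects") or []:
            if subj.get("name") in RISKY_SUBJECTS:
                findings.append((item["metadata"]["name"], subj["name"]))
    return findings

# Example against a minimal, hand-written binding document:
sample = {"items": [
    {"metadata": {"name": "debug-binding"},
     "subjects": [{"kind": "Group", "name": "system:authenticated"}]},
    {"metadata": {"name": "alice-admin"},
     "subjects": [{"kind": "User", "name": "alice"}]},
]}
print(find_risky(sample))  # [('debug-binding', 'system:authenticated')]
```

On GKE, remember that `system:authenticated` matches any validated Google account, so any hit here deserves immediate review.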


Will the Rise of Generative AI Increase Technical Debt?

The rise of generative AI-related tools will likely increase technical debt, both due to the rush to hastily adopt new capabilities and the need to mold AI models to suit specific requirements. “New LLMs and generative AI applications will undoubtedly increase technical debt in the future, or at a minimum, greatly increase the need to manage that debt proactively,” said Quillin. “It starts with new requirements to continually manage, maintain, and nurture these models from a broad range of new KPIs from bias, concept drift, and shifting business, consumer, and environmental inputs and goals,” he said. Incorporating AI may require a significant upfront commitment, leading to additional technical debt. “It won’t be just a build-and-maintain scenario, but rather, the first of many steps on a long road ahead,” said Prince Kohli, CTO of Automation Anywhere. Product companies with a generative AI focus must invest in creating a data and model strategy, a data architecture to work with AI, controls for the AI and more. “Technology disruptions and pivots such as this always lead to this kind of technical debt that must be continually paid down, but it’s the price of admittance,” he said.



Quote for the day:

"The best preparation for tomorrow is doing your best today." -- H. Jackson Brown, Jr.

Daily Tech Digest - February 28, 2023

Does the Future of Work include Network as a Service (NaaS)?

NaaS enables companies to implement a network infrastructure that evolves with the business, providing the flexibility to adapt as needs change. With NaaS, companies can focus on business outcomes and service level objectives for their network and the accessibility required for their community of workers, partners, and customers. NaaS eliminates organizations having to worry about keeping up with the pace of technology change by relying on the strength and expertise of their implementation partner. NaaS eliminates large upfront capital expenditure investments that often go into new network infrastructure design, planning, and implementation, replacing them with a monthly subscription-based or flexible consumption model and alleviating the financial impact of rebuilding a new workplace environment. NaaS enables more flexibility by not tying the organization down to specific hardware or capital investments that may eventually become obsolete.


How Technical Debt Hampers Modernization Efforts for Organizations

“When you develop an application, you take certain shortcuts for which you're going to have to pay the price back later on,” explains Olivier Gaudin, cofounder and CEO of SonarSource, which develops open-source software for continuous code quality and security. “You accept that your code is not perfect. You know that it will have a certain cost when you come back to it later. It will be a bit more difficult to read, to maintain or to change.” ... Experts note the patience and long-term strategy required to overcome technical debt. “It’s a matter of focusing on longer-term strategy over short-term financial goals,” Orlandini says. “Unfortunately for Southwest, the issues were well-known. However, the business as a whole did not have the will or motivation to invest in fixing it until it was too late. They are an extreme example but serve as a very valid case in point of what can happen if you do not understand the issues and the ultimate repercussions of not investing to avoid a meltdown, in whatever form that would take for each organization.”


Just how big is this new generative AI? Think internet-level disruption

While resources and information availability increased by an unprecedented degree, so too did misinformation, scams, and criminal activity. One of the biggest problems with ChatGPT is that it presents completely wrong information as eloquently and confidently as it presents accurate information. Unless requested, it doesn't provide sources or cite where that information came from. Because it aggregates a tremendous amount of free-form information, it's often impossible to trace how it comes by its knowledge and assertions. This makes it ripe for corruption and gaming. At some point, AI designers will need to open their systems to the broader internet. When they do, oh boy, it's going to be rough. Today, there are entire industries dedicated to manipulating Google search results. I'm often required by clients to put my articles through software applications that weigh each word and phrase against how much Google oomph it produces, and then I'm asked to change what I write to appeal more to the Google algorithms.


Is blockchain really secure? Here are four pressing cyber threats you must consider

Blockchains use consensus protocols to reach agreement among participants when adding a new block. Since there is no central authority, consensus protocol vulnerabilities threaten to control a blockchain network and dictate its consensus decisions from various attack vectors, such as the majority (51%) and selfish mining attacks. ... The second threat is related to the exposure of sensitive and private data. Blockchains are transparent by design, and participants may share data that attackers can use to infer confidential or sensitive information. As a result, organizations must carefully evaluate their blockchain usage to ensure that only permitted data is shared without exposing any private or sensitive information. ... Attackers may compromise private keys to control participants’ accounts and associated assets by using classical information technology methods, such as phishing and dictionary attacks, or by exploiting vulnerabilities in blockchain clients’ software.
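The majority-attack risk can be quantified. Nakamoto's gambler's-ruin analysis in the Bitcoin whitepaper gives the probability that an attacker controlling a share q of total hash power ever overtakes an honest chain that is z blocks ahead; a small sketch:

```python
def catch_up_probability(q: float, z: int) -> float:
    """Probability an attacker with hash-power share q ever overtakes an
    honest chain z blocks ahead (Nakamoto's gambler's-ruin result)."""
    p = 1.0 - q
    if q >= p:  # at 50% or more, the attacker eventually succeeds
        return 1.0
    return (q / p) ** z

# Probability of catching up from 6 confirmations behind:
for q in (0.10, 0.30, 0.45):
    print(f"q={q}: {catch_up_probability(q, 6):.6f}")
```

The probability decays exponentially in z while q stays below one half, which is why waiting for confirmations works, and why it fails abruptly at 51%.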


Behaviors To Avoid When Practicing Pair Programming

Despite its popularity, pair programming seems to be a methodology that is not widely adopted by the industry. When it is, what "pair" and "programming" mean might vary given a specific context. Sometimes pair programming is used at specific moments throughout the day of practitioners, as reported by Lauren Peate on the podcast Software Engineering Unlocked hosted by Michaela Greiler, to fulfill specific tasks. But in XP, pair programming is the default approach to developing all aspects of the software. Due to the variation and interpretation of what pair programming is, companies that adopt it might face some misconceptions about how to practice it. Often, this is the root cause of a poor experience while pairing. One example is a lack of soft (social) skills. ... The driver-and-navigator style requires the pair to focus on a single problem at once. Therefore, the navigator is the one who should give support and question the driver's decisions to keep both in sync. When that does not happen, the collaboration session might suffer from a lack of interaction between the pair. 


When it comes to network innovation, we must protect the data ‘pipes’

We must conclude that any encrypted information collected by foreign intelligence services will eventually be cracked through sufficient compute power and time. This is one reason why supercomputers are part of the race for information dominance. At the level of supercomputers, compute is ultimately measured in the cost to build and the cost to operate. If you do not have access to cutting-edge chips, simply increase the number of chips, whether central processing units, graphics processing units, or other compute units like AI accelerators. It will cost more to make and cost more electricity to operate, but the amount of compute will be available to the government or corporation that invested in the system. Without a true “zero trust” scheme, any compromise of any node on any network becomes a pivot point for further attacks. The problem with “zero trust” is that to be effective, you need a mature network model that can be secured, not a “growing, organic network” that is adapting rapidly to meet the needs of the user.


Unstructured data and the storage it needs

As we’ve seen, unstructured data is more or less defined by the fact that it is not created by use of a database. More structure may be applied to unstructured data later in its life, but then it becomes something else. ... It’s quite possible to build adequately performing file and object storage on-site using spinning disk. At the capacities needed, HDD is often the most economical option. But advances in flash manufacturing have led to high-capacity solid-state storage becoming available, and storage array makers have started to use it in hardware capable of file and object storage. This is QLC – quad-level cell – flash. It packs four levels of binary switches into each flash cell to provide higher storage density, and so a lower cost per GB than any other commercially usable flash today. The trade-off with QLC, however, is that flash lifetime can be compromised, so it is better suited to large-capacity, less frequently accessed data. But the speed of flash is particularly well-suited to unstructured use cases, such as analytics, where rapid processing and therefore rapid I/O is needed.
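The density trade-off behind QLC is simple arithmetic; a short sketch (my own illustration, using the standard bits-per-cell figures for each flash type) makes it concrete:

```python
# Bits stored per flash cell for each common NAND type.
cell_types = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

for name, bits in cell_types.items():
    # A cell storing n bits must distinguish 2**n voltage levels.
    levels = 2 ** bits
    print(f"{name}: {bits} bit(s)/cell, {levels} voltage levels, "
          f"{bits}x the density of SLC")
```

QLC stores 4 bits per cell, so it must reliably distinguish 16 voltage levels rather than SLC's 2. That quadruples density (and cuts cost per GB) but narrows the margin between levels, which is exactly why endurance suffers and QLC suits the large, infrequently rewritten datasets described above.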


The Cybersecurity Hype Cycle of ChatGPT and Synthetic Media

Historically, spearphishing messages have been partially or entirely crafted by people. Synthetic chat, however, makes it possible to automate this process – and highly advanced synthetic chat like ChatGPT produces messages that seem just as convincing as, or more convincing than, a human-written message. It also opens the door to automated, interactive malicious communications. With this in mind, threat actors can quickly and cheaply mass-produce high-cost, highly effective approaches like spearphishing. These capabilities could be used to support cybercrime, nation-state operations, and more. Advances like ChatGPT may also have a meaningful impact on information operations, which have come to the forefront due to foreign influence in recent US presidential elections. Technologies such as ChatGPT can generate lengthy, realistic content supporting divisive narratives, which could help scale up information operations.


How to de-risk your digital ecosystem

In short, any de-risking framework must assume that the largest source of cyberthreats comes not from someone breaking in, but from a door left open for an uninvited guest. Organizations must adapt their mindset, their processes, and their resources accordingly. ... In many organizations, the responsibility for closing risk gaps falls to several leaders, but not to a single point of authority. The failure is understandable, as digital ecosystems touch multiple dimensions of an enterprise. But responsibility for the total risk environment and for de-risking is then shared, though not necessarily met. This lack of accountability results in a lack of power to act and to set de-risking as a priority within the organization. ... Without understanding the context of the business, understanding and remediating risk is difficult to do effectively. For example, an outside vendor can be a potential source of risk while also playing a critical, central role in the business; resolving and mitigating the issue may require special handling and attention.


Closing the Cybersecurity Talent Gap

Cybersecurity is often viewed as just another technical talent field, yet candidates are expected to possess a wide range of rapidly evolving knowledge and skills. When filling staffing gaps, leaders should examine the skill sets that are missing from their current team, such as creative problem solving, stakeholder communications, buy-in development, and change enablement. “Look for candidates who will help balance out existing team skills as opposed to individuals who match a specific technical qualification,” Glair says. Before hiring can begin, it's necessary to attract suitable candidates. Initial search steps should include website updates and social media posts, Glair says. He also suggests creating an internal “cybersecurity academy” that will build talent from within the organization. “This should include the technical, process, communications, and leadership skills needed to address today’s cybersecurity challenges,” Glair notes. Burnet recommends sponsoring a “sourcing jam.” “That means getting recruiters and/or hiring managers in a room together ... to trawl through their networks and get them to personally reach out.”



Quote for the day:

"Leaders are the ones who keep faith with the past, keep step with the present, and keep the promise to posterity." -- Harold J. Seymour