Showing posts with label ShiftLeft. Show all posts

Daily Tech Digest - March 06, 2026


Quote for the day:

"Actions, not words, are the ultimate results of leadership." -- Bill Owens



Strategy fails when leaders confuse ambition with readiness

This article explores why bold corporate transformations often falter despite having sound strategic logic. The core issue lies in leaders mistakenly treating clear intent as a proxy for the actual capacity to change. While ambition is highly visible in presentations and public goals, organizational readiness—comprising internal skills, trust, and execution muscle—exists beneath the surface and is built slowly over time. When leadership pushes initiatives significantly faster than the organization can absorb them, it creates a "readiness gap" characterized by deep change fatigue, performative work, and eroding employee belief. Pushing harder in response often exacerbates the problem, as what looks like resistance is frequently just mental exhaustion from reaching a finite capacity for change. To succeed, leaders must treat readiness as a dynamic leadership discipline rather than a minor operational detail. This involves making difficult strategic tradeoffs, prioritizing the careful sequencing of projects, and investing in internal capabilities before attempting to scale. Ultimately, effective strategy is not just about choosing a direction but about mastering timing; true progress depends less on the volume of projects launched and more on the organization’s ability to internalize new behaviors. By bridging the gap between vision and preparedness, leaders can transform high-level ambition into sustainable, long-term impact.


Why Calm Leadership Is A Strategic Advantage In High-Risk Technology

In the Forbes article, Justin Hertzberg argues that composure is not just a personality trait but a vital strategic capability for managing modern technical infrastructure. While the myth of the high-intensity executive persists, Hertzberg suggests that in sectors like AI and cybersecurity, the ability to remain steady under pressure is a fundamental form of operational risk management. This calm approach preserves cognitive bandwidth, ensuring that decision-making remains structured and analytical rather than reactive or impulsive. A critical component of this leadership style is the cultivation of psychological safety; by responding with curiosity instead of emotion, leaders encourage teams to surface small technical anomalies early, preventing them from escalating into catastrophic failures. Furthermore, calm leadership acts as a force multiplier for clarity, converting complex technical signals into actionable priorities and consistent communication rhythms. This steadiness also supports human resilience, recognizing that human operators are just as essential to system stability as the hardware and software they manage. Ultimately, Hertzberg concludes that composure is a skill that can be trained through simulation and culture. As technology becomes more interconnected, the most significant competitive edge is a leader who provides a "quiet advantage"—the discipline to stay focused when uncertainty is at its peak.


AI fraud pushing pace on need for advanced deepfake detection tools

The article highlights the urgent need for advanced deepfake detection tools as generative AI accelerates fraud capabilities, forcing organizations to reevaluate their security frameworks. Dr. Edward Amoroso emphasizes that deepfake protection should be viewed as a high-ROI investment rather than an experimental control, urging Chief Information Security Officers to integrate these threats into risk registers using existing frameworks such as FAIR or ISO/IEC 27005. By reframing deepfakes as identity-based loss events, executives can justify the relatively modest costs of detection platforms compared to the massive financial and reputational damage of successful attacks. However, a significant "readiness gap" persists; research from DataVisor indicates that while 74 percent of financial leaders recognize AI-driven fraud as a primary threat, 67 percent still lack the necessary infrastructure to deploy effective defenses. This vulnerability is further compounded by the rapid evolution of vocal cloning, which a paper from the Bloomsbury Intelligence and Security Institute warns could soon render traditional voice biometrics obsolete. To counter these risks, the article advocates for a shift toward identity authenticity as a measurable control objective, utilizing specific metrics such as detection accuracy and response times. Ultimately, sustaining trust in digital identities requires a transition from legacy operational speeds to real-time, AI-powered defensive strategies.


Autoscaling Is Not Elasticity

In the DZone article, David Iyanu Jonathan argues that while these terms are often used interchangeably, they represent fundamentally different concepts in cloud system design. Autoscaling is a reactive, algorithmic mechanism that adjusts resource counts based on specific metrics, whereas true elasticity is a resilient architectural property that allows a system to absorb load gracefully without collapsing. The author warns that "mindless" autoscaling—driven by single metrics like CPU usage without hard caps—can actually exacerbate failures, such as when a cluster scales up during a DDoS attack or saturates a downstream database like Redis, leading to cascading outages and astronomical cloud bills. To achieve genuine elasticity, organizations must implement sophisticated guardrails, including hard instance caps to protect downstream dependencies, longer cooldown periods to prevent resource oscillation, and composite triggers that monitor request rates and error percentages alongside traditional utilization signals. Furthermore, the article emphasizes the necessity of dependency health gates, manual override procedures, and cost circuit breakers to ensure operational stability. Ultimately, Jonathan posits that resilience is born from policy and testing rather than blind algorithmic faith; true elasticity requires a deep understanding of system bottlenecks and the discipline to prioritize long-term stability through proactive chaos drills and rigorous policy audits.
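The guardrails described here can be condensed into a single bounded scaling decision. The following Python sketch is illustrative only, not any cloud provider's API: the function name, thresholds, and signal fields are all assumptions made for the example.

```python
import time

def desired_replicas(current, cpu, rps, error_rate, last_scale_ts,
                     hard_cap=20, floor=2, cooldown_s=300,
                     cpu_high=0.75, rps_per_replica=500, error_max=0.05):
    """Compute a bounded scaling decision from composite signals.

    Guardrails from the article: a hard instance cap, a cooldown window
    to prevent oscillation, and triggers that combine CPU with request
    rate and error percentage instead of CPU alone.
    """
    # Cooldown: refuse to change anything until the window has elapsed.
    if time.time() - last_scale_ts < cooldown_s:
        return current

    # Dependency health gate: if errors are already high, adding
    # replicas only hammers the saturated downstream harder.
    if error_rate > error_max:
        return current

    # Composite trigger: scale on whichever signal demands more capacity.
    by_cpu = current + 1 if cpu > cpu_high else current
    by_rps = max(1, round(rps / rps_per_replica))
    target = max(by_cpu, by_rps)

    # Hard cap protects downstream dependencies (e.g. a shared Redis),
    # even during a DDoS-driven request flood.
    return min(max(target, floor), hard_cap)
```

Note how a request flood hits the cap instead of scaling unboundedly: `desired_replicas(5, 0.5, 100_000, 0.0, 0)` stops at the hard cap of 20 rather than requesting 200 instances.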


Meet Your New Colleague: What OpenClaw Taught Me About the Agentic Future

This blog post by Jon Duren explores the transformative impact of OpenClaw, an open-source project that has catalyzed the transition from conversational chatbots to autonomous "agentic" AI. Unlike traditional AI assistants that merely respond to prompts, OpenClaw demonstrates a system capable of assuming specific roles, maintaining deep context, and executing complex tasks using diverse digital tools. This shift represents a move toward AI as a functional "colleague" rather than just a software utility. Duren emphasizes that while OpenClaw is currently a rough proof-of-concept, its viral success has signaled a massive market appetite, prompting major foundation labs to accelerate their development of enterprise-grade agentic platforms. For organizations, this evolution necessitates immediate strategic preparation, particularly regarding robust data infrastructure and governance frameworks to ensure these autonomous agents operate within safe guardrails. The author argues that we are witnessing the start of an "AI Flywheel" effect, where early experimentation leads to compounding competitive advantages. Ultimately, the piece suggests that the future of work involves integrating these proactive agents into human teams, transforming repetitive, context-heavy workflows into streamlined processes. Leaders must develop a deep understanding of this agentic potential now to navigate an era where AI effectively functions as a productive team member.


Why digital identity is the new perimeter in a zero-trust world

In the contemporary cybersecurity landscape, the traditional network firewall has transitioned from a definitive security seal to an obsolete relic, replaced by digital identity as the primary perimeter. As organizations embrace cloud-first strategies and remote work, data is no longer confined to physical boundaries, necessitating a Zero Trust approach centered on the mantra of "never trust, always verify." Given that approximately 80% of breaches involve stolen credentials, robust Identity and Access Management (IAM) is now a strategic imperative for maintaining system integrity. This framework relies on continuous authentication and adaptive signals—such as real-time location and biometrics—to monitor risks dynamically rather than relying on static passwords. The scope of identity has also expanded significantly to include machine identities, including IoT devices and APIs, which currently outnumber human users and require automated governance to prevent unauthorized access. Furthermore, while artificial intelligence facilitates sophisticated fraud, it simultaneously empowers defenders with predictive anomaly detection and risk-based access controls. By centralizing authentication and automating the lifecycle management of both human and non-human accounts, organizations can effectively mitigate human error and ensure compliance. Ultimately, treating digital identity as the new perimeter is the only viable method to secure modern digital transformations against the evolving complexities of the current global threat landscape.


State-affiliated hackers set up for critical OT attacks that operators may not detect

Research from industrial cybersecurity firm Dragos reveals a dangerous shift in nation-state cyber strategy, as state-affiliated threat groups move beyond mere network access to actively mapping methods for disrupting physical industrial processes. Groups like China-linked Voltzite and Russia-linked Electrum are now weaponizing operational technology (OT) access to identify specific conditions that can trigger process shutdowns or destroy physical infrastructure. For instance, Voltzite has been observed manipulating engineering workstations within U.S. energy and pipeline networks, while Russian actors have expanded their destructive operations into NATO territory. Despite these escalating threats, critical infrastructure operators remain alarmingly unprepared. Dragos reports that fewer than 10% of OT networks worldwide have adequate security monitoring, and a staggering 90% of asset owners still lack the visibility to detect techniques used in the Ukraine power grid attacks a decade ago. This lack of oversight is compounded by poor network segmentation and a reliance on internet-facing devices with default credentials. Consequently, many breaches are only discovered when operators notice physical malfunctions rather than through automated alerts. As attackers deploy sophisticated wiper malware and corrupt device firmware, the inability of many organizations to detect, contain, or respond to these intrusions poses a significant risk to global industrial stability and public safety.


The Coruna exploit: Why iPhone users should be concerned

The Coruna exploit represents a significant escalation in mobile security threats, illustrating how sophisticated, state-grade hacking tools can eventually filter down into the hands of mass-scale cybercriminals. Discovered by Google’s Threat Intelligence Group and iVerify, Coruna is a highly polished exploit kit capable of hijacking iPhones running iOS 13 through iOS 17.2.1 simply when a user visits a malicious website. This complex suite utilizes twenty-three distinct vulnerabilities and five exploit chains to grant attackers root access, allowing them to exfiltrate sensitive data, including text snippets and cryptocurrency information. Evidence suggests the software may have originated from a U.S. government contractor before being utilized by various nation-state actors from Russia and China, and ultimately criminal organizations. Notably, the malware is advanced enough to detect and cease operations if an iPhone’s Lockdown Mode is active, highlighting the effectiveness of Apple’s specialized security features. While Apple has addressed these vulnerabilities in recent updates such as iOS 26, thousands of users remain at risk due to slow adoption rates for new operating systems. The proliferation of Coruna serves as a stark reminder that digital backdoors and weaponized exploits, once created, inevitably escape state control and threaten the privacy and security of ordinary citizens worldwide.


Digital sovereignty options for on-prem deployments

Digital sovereignty is rapidly evolving from a compliance requirement into a fundamental architectural necessity for global enterprises seeking to maintain absolute control over their data and infrastructure. As highlighted in the linked article, the shift away from standard public cloud services is being driven by stringent regional regulations and geopolitical concerns regarding unauthorized data access by foreign governments. To address these challenges, major technology providers like Cisco, IBM, Fortinet, and Versa Networks have introduced sophisticated on-premises and air-gapped solutions. Cisco’s Sovereign Critical Infrastructure portfolio emphasizes physical isolation and customer-controlled licensing, while IBM’s Sovereign Core focuses on securing the AI lifecycle through transparent, architecturally enforced platforms like Red Hat OpenShift. Additionally, SASE leaders Fortinet and Versa are offering sovereign versions of their networking stacks, allowing organizations to manage security policies and data flows within their own jurisdictions. These localized deployment options provide essential safeguards for regulated sectors like government and finance, ensuring that the control plane, encryption keys, and AI inference remain entirely within the organization’s legal and physical boundaries. Ultimately, achieving true digital sovereignty requires balancing the benefits of modern cloud agility with the rigorous oversight provided by dedicated, premises-based hardware and software frameworks. By embracing these models, businesses can navigate global complexities securely.


Shift Left Has Shifted Wrong: Why AppSec Teams – Not Developers – Must Lead Security in the Age of AI Coding

The article by Bruce Fram argues that the traditional "narrow" shift-left security model—where developers are tasked with finding and fixing individual vulnerabilities—has fundamentally failed, particularly in the escalating era of AI-generated code. Fram highlights a staggering 67% increase in CVEs since 2023, noting that developers are primarily incentivized to ship features rather than master complex security nuances. This challenge is compounded by AI assistants; nearly 25% of AI-generated code contains security flaws, and as developers transition into "agent managers" who orchestrate multiple AI tools, the volume of vulnerabilities becomes unmanageable for manual human review. To address this, Fram posits that Application Security (AppSec) teams, rather than developers, must take the lead. Instead of merely reporting findings, AppSec professionals should transform into security automation engineers who utilize AI-driven tools to triage findings and automatically generate verified code fixes. In this refined workflow, developers simply review automated pull requests to ensure functional integrity. Ultimately, the piece contends that organizations must move beyond the unrealistic expectation of developer-led security, embracing automated remediation to maintain pace with the rapid, AI-driven development lifecycle and reduce the growing enterprise vulnerability backlog effectively.

Daily Tech Digest - December 27, 2025


Quote for the day:

"Always remember, your focus determines your reality." -- George Lucas



Leading In The Age Of AI: Five Human Competencies Every Modern Leader Needs

Leaders are surrounded by data, metrics and algorithmic recommendations, but decision quality depends on interpretation rather than volume. Insight is the ability to turn information and diverse perspectives into clarity. It requires curiosity, patience and the humility to question assumptions. Leaders who demonstrate this capability articulate complex issues clearly, invite dissent before deciding and translate analysis into meaningful direction. ... Integration is the capability to design environments where human creativity and machine intelligence reinforce one another. Leaders strong in this capability align technology with purpose and culture, encourage experimentation and ensure that tools enhance human capability rather than replacing reflection and judgment. The aim is capability at scale, not efficiency at any cost. ... Inspiration is the ability to energize people by helping them see what is possible and how their work contributes to a larger purpose. It is grounded optimism rather than polished enthusiasm. Leaders who inspire use story, clarity and authenticity to create shared commitment rather than simple compliance. When purpose becomes personal, contribution follows. ... It is not only about speed or quarterly numbers. It is about sustainable value for people, organizations and society. Leaders strong in this capability balance performance with well-being and growth, adapt strategy based on real feedback and design systems that strengthen capacity over time instead of exhausting it.


Big shifts that will reshape work in 2026

We’re moving into a new chapter where real skills and what people can actually do matter more than degrees or job titles. In 2026, this shift will become the standard across organisations in APAC. Instead of just looking for certificates, employers are now keen to find people who can show adaptability, pick up new things quickly, and prove their expertise through action. ... as helpful as AI can be, there’s a catch. Technology can make things faster and smarter, but it’s not a substitute for the human touch—creativity, empathy, and making the right call when it matters. The real test for leaders will be making sure AI helps people do their best work, not strip away what makes us human. That means setting clear rules for how AI is used, helping employees build digital skills, and keeping trust at the centre of it all. Organisations that succeed will strike a balance: leveraging AI’s analytical power to unlock efficiencies, while empowering people to focus on the relational, imaginative, and moral dimensions of work. ... Employee wellbeing is set to become the foundation of the future of work. No longer a peripheral benefit or a box to check, wellbeing will be woven into organisational culture, shaping every aspect of the employee experience. ... Purpose is emerging as the new currency of talent attraction and retention, particularly for Gen Z and millennials, who are steadfast in their desire to work for organisations that reflect their personal values. 


How AI could close the education inequality gap - or widen it

On one side are those who say that AI tools will never be able to replace the teaching offered by humans. On the other side are those who insist that access to AI-powered tutoring is better than no access to tutoring at all. The one thing that can be agreed on across the board is that students can benefit from tutoring, and fair access remains a major challenge -- one that AI may be able to smooth over. "The best human tutors will remain ahead of AI for a long time yet to come, but do most people have access to tutors outside of class?" said Mollick. To evaluate educational tools, Mollick uses what he calls the "BAH" test, which measures whether a tool is better than the best available human a student can realistically access. ... AI tools that function like a tutor could also help students who don't have the resources to access a human tutor. A recent Brookings Institution report found that the largest barrier to scaling effective tutoring programs is cost, estimating a requirement of $1,000 to $3,000 per student annually for high-impact models. Because private tutoring often requires financial investment, it can drive disparities in educational achievement. Aly Murray experienced those disparities firsthand. Raised by a single mother who immigrated to the US from Cuba, Murray grew up as a low-income student and later recognized how transformative access to a human tutor could have been.


Shift-Left Strategies for Cloud-Native and Serverless Architectures

The whole architectural framework of shift-left security depends on moving critical security practices earlier in the development lifecycle. Incorporating security in the development lifecycle should not be an afterthought. Within this context, teams are empowered to identify and eliminate risks at design time, build time, and during CI/CD — not after. These modern workloads are highly dynamic and interconnected, and a single mishap can trickle down across the entire environment. ... Serverless Functions can introduce issues if they run with excessive privileges. This can be addressed by simply embedding permissions checks early in the development lifecycle. A baseline of minimum required identity and access management (IAM) privileges should be enforced to keep development tight. Wildcards or broad permissions should be avoided in this context. Also, it makes sense to use runtime permission boundary generation — otherwise, functions can be compromised without appropriate safeguards. ... In modern-day cloud environments, it is crucial that observability is considered a major priority. Shifting left within the context of observability means logs, metrics, traces, and alerts are integrated directly into the application from day one. AWS CloudWatch or DataDog metrics can be integrated into the application code so that developers can keep an eye on the critical behaviors of the application.
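The least-privilege baseline described above can be enforced as a simple shift-left gate in CI. The sketch below is a hypothetical check, not a full policy linter: it flags wildcard actions and resources in an AWS-style IAM policy document before a function is deployed.

```python
def find_overbroad_statements(policy: dict) -> list:
    """Flag IAM policy statements that violate least privilege.

    Run against each serverless function's policy document in CI,
    failing the build when wildcards are found.
    """
    findings = []
    for i, stmt in enumerate(policy.get("Statement", [])):
        if stmt.get("Effect") != "Allow":
            continue  # Deny statements don't grant privilege
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        # Wildcard actions ("*" or "s3:*") grant far more than a function needs.
        for action in actions:
            if action == "*" or action.endswith(":*"):
                findings.append(f"Statement {i}: wildcard action {action!r}")
        # Resource "*" lets the function touch every ARN in the account.
        if "*" in resources:
            findings.append(f"Statement {i}: wildcard resource")
    return findings
```

A policy scoped to a single action on a single ARN passes cleanly; one granting `s3:*` on `*` is rejected with two findings.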


Agentic AI and Autonomous Agents: The Dawn of Smarter Machines

At their core, agentic AI and autonomous agents rely on a few powerhouse components: planning, reasoning, acting, and tool integration. Planning is the blueprint phase: the AI breaks a goal into subtasks, like mapping out a road trip with stops for gas and sights. Reasoning kicks in next, where it evaluates options using logic, past data, or even ethical guidelines (more on that later). Acting is the execution: interfacing with the real world via APIs, databases, or even physical robots. And tool integration?  ... Diving deeper, it’s worth comparing agentic AI to other paradigms to see why it’s a game-changer. Standalone LLMs, like basic GPT models, are fantastic for generating text but falter on execution — they can’t “do” things without external help. Agentic systems bridge that by embedding action loops. Multi-agent setups take it further: Imagine a team of specialized agents collaborating, one for research, another for analysis, like a virtual task force. ... Looking ahead, the future of agentic AI feels electric yet cautious. By 2030, I predict multi-agent collaborations becoming standard, with advancements in human-in-the-loop designs to mitigate ethics pitfalls — like ensuring transparency in decision-making or preventing job displacement. OpenAI’s push for standardized frameworks addresses this, but we must grapple with questions: Who owns the data agents learn from? How do we audit autonomous actions?
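The plan-reason-act loop with tool integration can be sketched in a few lines. In this minimal illustration the planner is a hard-coded stub standing in for a model call, and the tools are local functions standing in for real APIs; all names here are invented for the example.

```python
def plan(goal):
    """Planning: break the goal into (tool, argument) subtasks (stubbed)."""
    return [("search", goal), ("summarize", goal)]

# Tool integration: a registry mapping tool names to callables.
TOOLS = {
    "search": lambda q: f"results for {q}",
    "summarize": lambda q: f"summary of {q}",
}

def run_agent(goal):
    context = []  # the agent maintains context across steps
    for tool_name, arg in plan(goal):
        # Reasoning: decide whether the requested tool is available.
        tool = TOOLS.get(tool_name)
        if tool is None:
            continue
        # Acting: execute via the tool and fold the result into context.
        context.append(tool(arg))
    return context
```

A real agent would replace the stubs with model calls and external APIs, and would loop until the goal is judged complete, but the control flow is the same.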


Operationalizing Data Strategy with OKRs: From Vision to Execution

For any business, some of the most critical data-driven initiatives and priorities include risk mitigation, revenue growth, and customer experience. To drive more effectiveness and accuracy in such business functions, finding ways to blend the technical output and performance data with tangible business outcomes is important. You must also proactively assess the shortcomings and errors in your data strategy to identify and correct any misaligned priorities. ... OKRs can empower data teams to leverage analytics and data sources to deliver highly actionable, timely insights. Set measurable and time-bound objectives to ensure focus and drive tangible progress toward your goals by leveraging an OKR platform, creating visually appealing dashboards, and assigning accountability to employees. ... If your high-level vision is “to become a data-driven organization,” the most effective way to work toward it is to break it into specific and measurable objectives. More importantly, consider segmenting your core strategy into multiple use cases, like operations optimization, customer analytics, and regulatory compliance. With these easily trackable segments, improve your focus and enable your teams to deliver incremental value. ... By tying OKRs to processes like governance and quality, you can ensure that they become measurable and visible priorities, leading to fewer incidents and building confidence in analytics-based projects and processes.


This tiny chip could change the future of quantum computing

At the heart of the technology are microwave-frequency vibrations that oscillate billions of times per second. These vibrations allow the chip to manipulate laser light with remarkable precision. By directly controlling the phase of a laser beam, the device can generate new laser frequencies that are both stable and efficient. This level of control is a key requirement not only for quantum computing, but also for emerging fields such as quantum sensing and quantum networking. ... The new device generates laser frequency shifts through efficient phase modulation while using about 80 times less microwave power than many existing commercial modulators. Lower power consumption means less heat, which allows more channels to be packed closely together, even onto a single chip. Taken together, these advantages transform the chip into a scalable system capable of coordinating the precise interactions atoms need to perform quantum calculations. ... The researchers are now working on fully integrated photonic circuits that combine frequency generation, filtering, and pulse shaping on a single chip. This effort moves the field closer to a complete, operational quantum photonic platform. Next, the team plans to partner with quantum computing companies to test these chips inside advanced trapped-ion and trapped-neutral-atom quantum computers.


The 5-Step Framework to Ensure AI Actually Frees Your Time Instead of Creating More Work

Success with AI isn’t measured by the number of automations you have deployed. True AI leverage is measured by the number of high-value tasks that can be executed without oversight from the business owner. ... Map what matters most — It’s critical to focus your energy on where it matters the most. Look through your processes to identify bottlenecks and repetitive decisions or tasks that don’t need your input. ... Design roles before rules — Figure out where you need human ownership in your processes. These will be activities that require traits like empathy, creative thinking and high-level strategy. Once the roles are established, you can build automation that supports those roles. ... Document before you delegate — Both humans and machines need clear direction. Be sure to document any processes, procedures, and SOPs before delegating or automating them. ... Automate boring and elevate brilliant — Your primary goal with automation is to free up your time for creating, strategy and building relationships. Of course, the reality is that not everything should be automated. ... Measure output, not inputs — Too many entrepreneurs spend their time focused on what their team and AI agents are doing and not what they are achieving. Intentional automation requires placing your focus on outputs to ensure the processes you have in place are working effectively, or where they can be improved. 


The next big IT security battle is all about privileged access

As the space matures, privileged access workflows will increasingly depend on adaptive authentication policies that validate identity and device posture in real time. Vendors that offer flexible passwordless frameworks and integrations with existing IAM and PAM systems will see increased market traction. This will mark a shift toward the promised end of passwords, eliminating one of the most exploited attack vectors in privilege abuse and account takeovers. ... Instead of relying solely on human auditors or predefined rules, IAM/PAM solutions will use generative AI to summarize risky session activities, detect lateral movement indicators, and suggest remediations in real time. AI-assisted security will make privileged access oversight continuous and contextual, helping enterprises detect insider threats and compromised accounts faster than ever before. This will also move the industry toward autonomous access governance. ... Compromised privileged credentials will remain the single most direct path to data loss, and a sharp rise in targeted breaches, ransomware campaigns, and supply-chain intrusions involving administrative accounts will elevate IAM/PAM to a board-level concern in 2026. Enterprises will accelerate investments in vendor privileged access tools to mitigate risk from contractors, managed service providers, and external support staff.
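An adaptive authentication policy of the kind described can be pictured as a decision function over real-time signals. This is an illustrative sketch only; the field names, risk weights, and cutoffs are assumptions, not any vendor's policy engine.

```python
def access_decision(signals: dict) -> str:
    """Decide a privileged-access request from identity and posture signals.

    Hard requirements gate the request outright; softer contextual
    signals accumulate into a risk score that can trigger step-up
    authentication instead of a static allow/deny.
    """
    # Hard gates: no MFA or an unmanaged device is an immediate deny.
    if not signals["mfa_passed"] or not signals["device_managed"]:
        return "deny"

    # Contextual risk signals, weighted by severity (illustrative weights).
    risk = 0
    risk += 2 if signals["new_location"] else 0
    risk += 3 if signals["impossible_travel"] else 0
    risk += 1 if signals["off_hours"] else 0

    if risk >= 3:
        return "deny"
    if risk >= 1:
        return "step-up"  # require a fresh passwordless re-authentication
    return "allow"
```

The point of the sketch is the shape of the policy: a clean session is allowed, an unusual-but-plausible one is challenged, and a physically impossible one is refused, all evaluated per request rather than once at login.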


Mentorship and Diversity: Shaping the Next Generation of Cyber Experts

For those considering a career in cybersecurity, Voight's advice is both practical and inspiring: follow your passion and embrace the industry's constant evolution. Whether you're starting in security operations or exploring niche areas like architecture and engineering, the key is to stay curious and committed to learning. As artificial intelligence and automation reshape the field, Voight remains optimistic, assuring that human expertise will always be essential, encouraging aspiring professionals to dive into a field brimming with opportunity, innovation, and the chance to make a meaningful impact. ... Cybersecurity is fascinating and offers many paths of entry. You don't necessarily need a specific academic program to get involved. The biggest piece is having a passion for it. The more you love learning about this industry, the better it will be for you in the long run. It's something you do because you love it. ... Sometimes, it's the people and teams you work with that make the job exciting. You want to be doing something new and exciting, something you can embrace and contribute to. Keep an open mind to all the different paths. There isn't one direct path, and not everyone will become a Chief Information Security Officer (CISO). Being a CISO may not be the role everyone imagines it to be when considering the responsibilities involved.

Daily Tech Digest - February 11, 2025


Quote for the day:

"Your worth consists in what you are and not in what you have." -- Thomas Edison


Protecting Your Software Supply Chain: Assessing the Risks Before Deployment

Given the vast number of third-party components used in modern IT, it's unrealistic to scrutinize every software package equally. Instead, security teams should prioritize their efforts based on business impact and attack surface exposure. High-privilege applications that frequently communicate with external services should undergo product security testing, while lower-risk applications can be assessed through automated or less resource-intensive methods. Whether done before deployment or as a retrospective analysis, a structured approach to PST ensures that organizations focus on securing the most critical assets first while maintaining overall system integrity. ... While Product Security Testing will never prevent a breach of a third party out of your control, it is necessary to allow organizations to make informed decisions about their defensive posture and response strategy. Many organizations follow a standard process of identifying a need, selecting a product, and deploying it without a deep security evaluation. This lack of scrutiny can leave them scrambling to determine the impact when a supply chain attack occurs. By incorporating PST into the decision-making process, security teams gain critical documentation, including dependency mapping, threat models, and specific mitigations tailored to the technology in use. 
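The prioritization logic above — deep testing for high-privilege, externally communicating applications, lighter automated scans for the rest — can be sketched as a simple triage function. The weights and fields below are illustrative assumptions, not from the article.

```python
def triage(components: list) -> list:
    """Rank third-party components for product security testing (PST).

    Scores each component by privilege level, attack surface exposure,
    and business impact, then routes the riskiest to in-depth PST and
    the remainder to automated, less resource-intensive assessment.
    """
    def score(c):
        return (3 * c["privilege"]          # runs with admin/root rights?
                + 2 * c["external_comms"]   # talks to external services?
                + c["business_impact"])     # criticality to operations

    ranked = sorted(components, key=score, reverse=True)
    return [(c["name"], "deep PST" if score(c) >= 5 else "automated scan")
            for c in ranked]
```

Applied to an inventory, a privileged agent that calls home outscores a low-impact internal viewer and is routed to full product security testing first.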


Google’s latest genAI shift is a reminder to IT leaders — never trust vendor policy

Entities out there doing things you don’t like are always going to be able to get generative AI (genAI) services and tools from somebody. You think large terrorist cells can’t use their money to pay somebody to craft LLMs for them? Even the most powerful enterprises can’t stop it from happening. But that may not be the point. Walmart, ExxonMobil, Amazon, Chase, Hilton, Pfizer, Toyota, and the rest of those heavy hitters merely want to pick and choose where their monies are spent. Big enterprises can’t stop AI from being used to do things they don’t like, but they can make sure none of it is being funded with their money. If they add a clause to every RFP that they will only work with model-makers that agree to not do X, Y, or Z, that will get a lot of attention. The contract would have to be realistic, though. It might say, for instance, “If the model-maker later chooses to accept payments for the above-described prohibited acts, they must reimburse all of the dollars we have already paid and must also give us 18 months’ notice so that we can replace the vendor with a company that will respect the terms of our contracts.” From the perspective of Google, along with Microsoft, OpenAI, IBM, AWS and others, the idea is to take enterprise dollars on top of government contracts.


Is Fine-Tuning or Prompt Engineering the Right Approach for AI?

It’s not just about having access to GPUs — it’s about getting the most out of proprietary data with new tools that make fine-tuning easier. Here’s why fine-tuning is gaining traction:

- Better results with proprietary data: Fine-tuning allows businesses to train models on their own data, making the AI much more accurate and relevant to their specific tasks. This leads to better outcomes and real business value.
- Easier than ever before: Tools like Hugging Face’s open source libraries, PyTorch and TensorFlow, along with cloud services, have made fine-tuning more accessible. These frameworks simplify the process, even for teams without deep AI expertise.
- Improved infrastructure: The rising availability of powerful GPUs and cloud-based solutions has made it much easier to set up and run fine-tuning at scale.

While fine-tuning opens the door to more customized AI, it does require careful planning and the right infrastructure to succeed. ... As enterprises accelerate their AI adoption, choosing between prompt engineering and fine-tuning will have a significant impact on their success. While prompt engineering provides a quick, cost-effective solution for general tasks, fine-tuning unlocks the full potential of AI, enabling superior performance on proprietary data.
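As a conceptual illustration of what fine-tuning means (starting from pretrained weights and nudging them toward proprietary data), here is a toy one-parameter example in pure Python; real fine-tuning would use frameworks such as PyTorch or Hugging Face's libraries, and all numbers here are invented:

```python
# Toy "fine-tuning": adapt a pretrained weight to proprietary data by
# gradient descent on squared error for the model y ~ w * x.

def fine_tune(w, data, lr=0.05, epochs=200):
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

w_pretrained = 2.0                                   # learned on broad, generic data
proprietary = [(1.0, 3.1), (2.0, 5.9), (3.0, 9.2)]   # in-house data, roughly y = 3x
w_tuned = fine_tune(w_pretrained, proprietary)
print(round(w_tuned, 2))  # close to 3.0
```

The same idea scales up: the pretrained model supplies the starting point, and the proprietary data pulls the weights toward the business's specific task.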


Shifting left without slowing down

On the one hand, automation enabled by GenAI tools in software development is driving unprecedented developer productivity, further emphasizing the gap created by manual application security controls, like security reviews or threat modeling. On the other, recent advancements in code understanding enabled by these technologies, together with programmatic policy-as-code security controls, enable a giant leap in the value security automation can bring. ... The first step is recognizing security as a shared responsibility across the organization, not just a specialized function. Equipping teams with automated tools and clear processes helps integrate security into everyday workflows. Establishing measurable goals and metrics to track progress can also provide direction and accountability. Building cross-functional collaboration between security and development teams sets the foundation for long-term success. ... A common pitfall is treating security as an afterthought, leading to disruptions that strain teams and delay releases. Conversely, overburdening developers with security responsibilities without proper support can lead to frustration and neglect of critical tasks. Failure to adopt automation or align security goals with development objectives often results in inefficiency and poor outcomes. 
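A policy-as-code control can be as simple as security rules expressed as functions and evaluated automatically against change metadata. The policy names and fields below are invented for illustration:

```python
# Minimal policy-as-code sketch: each policy is a named rule that a
# pipeline evaluates against metadata about a proposed change.

POLICIES = [
    ("no-secrets-in-diff", lambda c: not c["contains_secrets"]),
    ("threat-model-for-new-endpoints",
     lambda c: not c["adds_public_endpoint"] or c["threat_model_done"]),
]

def evaluate(change):
    """Return the names of the policies the change violates."""
    return [name for name, rule in POLICIES if not rule(change)]

change = {"contains_secrets": False,
          "adds_public_endpoint": True,
          "threat_model_done": False}
violations = evaluate(change)
print(violations)  # ['threat-model-for-new-endpoints']
```

Because the rules live in code, they run on every change without a human in the loop, which is exactly the kind of automation the article argues closes the gap left by manual reviews.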


How To Approach API Security Amid Increasing Automated Attack Sophistication

We’ve now gone from ‘dumb’ attacks—for example, web-based attacks focused on extracting data from third parties and on a specific or single vulnerability—to ‘smart’ AI-driven attacks that often involve picking an actual target, resulting in a more focused attack. Going after a particular organization, perhaps a large organization or even a nation-state, instead of looking for vulnerable people is a significant shift. The sophistication is increasing as attackers manipulate request payloads to trick the backend system into an action. ... Another element of API security is being aware of sensitive data. Personally Identifiable Information (PII) is moving through APIs constantly and is vulnerable to theft or data exfiltration. Organizations often do not pay attention to vulnerabilities until the result is damage to the organization through leaked PII, stolen finances, or a harmed brand reputation. ... The security teams know the network systems and the infrastructure well but don’t understand the application behaviors. The DevOps team tends to own the applications but doesn’t see anything in production. This split in ownership leaves most organizations ripe for exploitation. Many data exfiltration cases fall into this no man’s land, since most incidents are executed by an authenticated user.
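Awareness of sensitive data can start with simple payload inspection. A minimal sketch that flags likely PII in an API payload (the patterns are deliberately naive; production systems use far more robust detection):

```python
import re

# Illustrative sketch: flag likely PII in API payloads before the data
# leaves the service boundary.

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def find_pii(payload: str):
    """Return the sorted kinds of PII detected in the payload."""
    return sorted(kind for kind, pat in PII_PATTERNS.items()
                  if pat.search(payload))

print(find_pii('{"user": "jane@example.com", "note": "call me"}'))
# ['email']
```

Even a crude detector like this, run at the API gateway, gives the security and DevOps teams a shared signal in the "no man's land" the article describes.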


Top 5 ways attackers use generative AI to exploit your systems

Gen AI tools help criminals pull together different sources of data to enrich their campaigns — whether this is group social profiling, or targeted information gleaned from social media. “AI can be used to quickly learn what types of emails are being rejected or opened, and in turn modify its approach to increase phishing success rate,” Mindgard’s Garraghan explains. ... The traditionally difficult task of analyzing systems for vulnerabilities and developing exploits can be simplified through use of gen AI technologies. “Instead of a black hat hacker spending the time to probe and perform reconnaissance against a system perimeter, an AI agent can be tasked to do this automatically,” Mindgard’s Garraghan says. ... “This sharp decrease strongly indicates that a major technological advancement — likely GenAI — is enabling threat actors to exploit vulnerabilities at unprecedented speeds,” ReliaQuest writes. ... Check Point Research explains: “While ChatGPT has invested substantially in anti-abuse provisions over the last two years, these newer models appear to offer little resistance to misuse, thereby attracting a surge of interest from different levels of attackers, especially the low skilled ones — individuals who exploit existing scripts or tools without a deep understanding of the underlying technology.”


Why firewalls and VPNs give you a false sense of security

VPNs and firewalls play a crucial role in extending networks, but they also come with risks. By connecting more users, devices, locations, and clouds, they inadvertently expand the attack surface with public IP addresses. This expansion allows users to work remotely from anywhere with an internet connection, further stretching the network’s reach. Moreover, the rise of IoT devices has led to a surge in Wi-Fi access points within this extended network. Even seemingly innocuous devices like Wi-Fi-connected espresso machines, meant for a quick post-lunch pick-me-up, contribute to the proliferation of new attack vectors that cybercriminals can exploit. ... More doesn’t mean better when it comes to firewalls and VPNs. Expanding a perimeter-based security architecture rooted in firewalls and VPNs means more deployments, more overhead costs, and more time wasted for IT teams – but less security and less peace of mind. Pain also comes in the form of degraded user experience and satisfaction across the entire organization, because VPN technology backhauls traffic. Other challenges like the cost and complexity of patch management, security updates, software upgrades, and constantly refreshing aging equipment as an organization grows are enough to exhaust even the largest and most efficient IT teams.


Building Trust in AI: Security and Risks in Highly Regulated Industries

AI hallucinations have emerged as a critical problem, with systems generating plausible but incorrect information - for instance, AI fabricated software dependencies, such as PyTorture, leading to potential security risks. Hackers could exploit these hallucinations by creating malicious components masquerading as real ones. In another case, an AI libelously fabricated an embezzlement claim, resulting in legal action - marking the first time AI was sued for libel. Security remains a pressing concern, particularly with plugins and software supply chains. A ChatGPT plugin once exposed sensitive data due to a flaw in its OAuth mechanism, and incidents like PyTorch’s vulnerable release over Christmas demonstrate the risks of system exploitation. Supply chain vulnerabilities affect all technologies, while AI-specific threats like prompt injection allow attackers to manipulate outputs or access sensitive prompts, as seen in Google Gemini. ... Organizations can enhance their security strategies by utilizing frameworks like Google’s Secure AI Framework (SAIF). These frameworks highlight security principles, including access control, detection and response systems, defense mechanisms, and risk-aware processes tailored to meet specific business needs.


When LLMs become influencers

Our ability to influence LLMs is seriously circumscribed. Perhaps if you’re the owner of the LLM and associated tool, you can exert outsized influence on its output. For example, AWS should be able to train Amazon Q to answer questions, etc., related to AWS services. There’s an open question as to whether Q would be “biased” toward AWS services, but that’s almost a secondary concern. Maybe it steers a developer toward Amazon ElastiCache and away from Redis, simply by virtue of having more and better documentation and information to offer a developer. The primary concern is ensuring these tools have enough good training data so they don’t lead developers astray. ... Well, one option is simply to publish benchmarks. The LLM vendors will ultimately have to improve their output or developers will turn to other tools that consistently yield better results. If you’re an open source project, commercial vendor, or someone else that increasingly relies on LLMs as knowledge intermediaries, you should regularly publish results that showcase those LLMs that do well and those that don’t. Benchmarking can help move the industry forward. By extension, if you’re a developer who increasingly relies on coding assistants like GitHub Copilot or Amazon Q, be vocal about your experiences, both positive and negative. 


Deepfakes: How Deep Can They Go?

Metaphorically, spotting deepfakes is like playing the world’s most challenging game of “spot the difference.” The fakes have become so sophisticated that the inconsistencies are often nearly invisible, especially to the untrained eye. It requires constant vigilance and the ability to question the authenticity of audiovisual content, even when it looks or sounds completely convincing. Recognizing threats and taking decisive actions are crucial for mitigating the effects of an attack. Establishing well-defined policies, reporting channels, and response workflows in advance is imperative. Think of it like a citywide defense system responding to incoming missiles. Early warning radars (monitoring) are necessary to detect the threat; anti-missile batteries (AI scanning) are needed to neutralize it; and emergency services (incident response) are essential to quickly handle any impacts. Each layer works in concert to mitigate harm. ... If a deepfake attack succeeds, organizations should immediately notify stakeholders of the fake content, issue corrective statements, and coordinate efforts to remove the offending content. They should also investigate the source, implement additional verification measures, and provide updates to rebuild trust and consider legal action. 


Daily Tech Digest - November 06, 2024

Enter the ‘Whisperverse’: How AI voice agents will guide us through our days

Within the next few years, an AI-powered voice will burrow into your ears and take up residence inside your head. It will do this by whispering guidance to you throughout your day, reminding you to pick up your dry cleaning as you walk down the street, helping you find your parked car in a stadium lot and prompting you with the name of a coworker you pass in the hall. It may even coach you as you hold conversations with friends and coworkers, or when out on dates, give you interesting things to say that make you seem smarter, funnier and more charming than you really are. ... Most of these devices will be deployed as AI-powered glasses because that form-factor gives the best vantage point for cameras to monitor our field of view, although camera-enabled earbuds will be available too. The other benefit of glasses is that they can be enhanced to display visual content, enabling the AI to provide silent assistance as text, images, and realistic immersive elements that are integrated spatially into our world. Also, sensor-equipped glasses and earbuds will allow us to respond silently to our AI assistants with simple head nod gestures of agreement or rejection, as we naturally do with other people. ... On the other hand, deploying intelligent systems that whisper in your ears as you go about your life could easily be abused as a dangerous form of targeted influence.


How to Optimize Last-Mile Delivery in the Age of AI

Technology is at the heart of all advancements in last-mile delivery. For instance, a typical map application gives the longitude and latitude of a building — its location — and a central access point. That isn't enough data when it comes to deliveries. In addition to how much time it takes to drive or walk from point A to point B, it's also essential for a driver to understand what to do at point B. At an apartment complex, for example, they need to know what units are in each building and on which level, whether to use a front, back, or side entrance, how to navigate restricted or gated areas, and how to access parking and loading docks or package lockers. Before GenAI, third-party vendors usually acquired this data, sold it to companies, and applied it to map applications and routing algorithms to provide delivery estimates and instructions. Now, companies can use GenAI in-house to optimize routes and create solutions to delivery obstacles. Suppose the data surrounding an apartment complex is ambiguous or unclear. For instance, there may be conflicting delivery instructions — one transporter used a drop-off area, and another used a front door. Or perhaps one customer was satisfied with their delivery, but another parcel delivered to the same location was damaged or stolen. 


Cloud providers make bank with genAI while projects fail

Poor data quality is a central factor contributing to project failures. As companies venture into more complex AI applications, the demand for tailored, high-quality data sets has exposed deficiencies in existing enterprise data. Although most enterprises understood that their data could be better, they haven't known how bad it was. For years, enterprises have been kicking the data can down the road, unwilling to fix it, while technical debt gathered. AI requires excellent, accurate data that many enterprises don't have—at least, not without putting in a great deal of work. This is why many enterprises are giving up on generative AI. The data problems are too expensive to fix, and many CIOs who know what's good for their careers don't want to take it on. The intricacies in labeling, cleaning, and updating data to maintain its relevance for training models have become increasingly challenging, underscoring another layer of complexity that organizations must navigate. ... The disparity between the potential and practicality of generative AI projects is leading to cautious optimism and reevaluations of AI strategies. This pushes organizations to carefully assess the foundational elements necessary for AI success, including robust data governance and strategic planning—all things that enterprises are considering too expensive and too risky to deploy just to make AI work.


Why cybersecurity needs a better model for handling OSS vulnerabilities

Identifying vulnerabilities and navigating vulnerability databases is of course only part of the dependency problem; the real work lies in remediating identified vulnerabilities impacting systems and software. Aside from general bandwidth challenges and competing priorities among development teams, vulnerability management also suffers from challenges around remediation, such as the real potential that implementing changes and updates can potentially impact functionality or cause business disruptions. ... Reachability analysis “offers a significant reduction in remediation costs because it lowers the number of remediation activities by an average of 90.5% (with a range of approximately 76–94%), making it by far the most valuable single noise-reduction strategy available,” according to the Endor report. While the security industry can beat the secure-by-design drum until they’re blue in the face and try to shame organizations into sufficiently prioritizing security, the reality is that our best bet is having organizations focus on risks that actually matter. ... In a world of competing interests, with organizations rightfully focused on business priorities such as speed to market, feature velocity, revenue and more, having developers quit wasting time and focus on the 2% of vulnerabilities that truly present risks to their organizations would be monumental.
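Reachability analysis can be sketched as a graph traversal: only vulnerabilities in code reachable from the application's entry points survive the filter. The call graph and CVE identifiers below are made up for illustration:

```python
from collections import deque

# Sketch of reachability-based noise reduction for OSS vulnerabilities.

call_graph = {
    "main": ["parse", "render"],
    "parse": ["lib.decode"],
    "render": [],
    "lib.decode": ["lib.inflate"],
    "lib.inflate": [],
    "lib.unused_ftp": [],  # dependency code the app never calls
}

def reachable(entry):
    """Breadth-first search over the call graph from an entry point."""
    seen, queue = set(), deque([entry])
    while queue:
        fn = queue.popleft()
        if fn in seen:
            continue
        seen.add(fn)
        queue.extend(call_graph.get(fn, []))
    return seen

vulns = {"lib.inflate": "CVE-A", "lib.unused_ftp": "CVE-B"}
live = reachable("main")
actionable = {fn: cve for fn, cve in vulns.items() if fn in live}
print(actionable)  # only CVE-A survives the filter
```

Filtering to reachable code is what lets teams ignore the noise and focus on the small fraction of findings that actually present risk.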


The new calling of CIOs: Be the moral arbiter of change

Unfortunately, establishing a strategy for democratizing innovation through gen AI is far from straightforward. Many factors, including governance, security, ethics, and funding, are important, and it’s hard to establish ground rules. ... What’s clear is tech-led innovation is no longer the sole preserve of the IT department. Fifteen years ago, IT was often a solution searching for a problem. CIOs bought technology systems, and the rest of the business was expected to put them to good use. Today, CIOs and their teams speak with their peers about their key challenges and suggest potential solutions. But gen AI, like cloud computing before it, has also made it much easier for users to source digital solutions independently of the IT team. That high level of democratization doesn’t come without risks, and that’s where CIOs, as the guardians of enterprise technology, play a crucial role. IT leaders understand the pain points around governance, implementation, and security. Their awareness means responsibility for AI, and other emerging technologies have become part of a digital leader’s ever-widening role, says Rahul Todkar, head of data and AI at travel specialist Tripadvisor.


5 Strategies For Becoming A Purpose-Driven Leader

Purpose-driven leaders are fueled by more than sheer ambition; they are driven by a commitment to make a meaningful impact. They inspire those around them to pursue a shared purpose each day. This approach is especially powerful in today’s workforce, where 70% of employees say their sense of purpose is closely tied to their work, according to a recent report by McKinsey. Becoming a purpose-driven leader requires clarity, strategic foresight, and a commitment to values that go beyond the bottom line. ... Aligning your values with your leadership style and organizational goals is essential for authentic leadership. “Once you have a firm grasp of your personal values, you can align them with your leadership style and organizational goals. This alignment is crucial for maintaining authenticity and ensuring that your decisions reflect your deeper sense of purpose,” Blackburn explains. ... Purpose-driven leaders embody the values and behaviors they wish to see reflected in their teams. Whether through ethical decision-making, transparency, or resilience in the face of challenges, purpose-driven leaders set the tone for how others in the organization should act. By aligning words with actions, leaders build credibility and trust, which are the foundations of sustainable success.


Chaos Engineering: The key to building resilient systems for seamless operations

The underlying philosophy of Chaos Engineering is to encourage building systems that are resilient to failures. This means incorporating redundancy into system pathways, so that the failure of one path does not disrupt the entire service. Additionally, self-healing mechanisms can be developed such as automated systems that detect and respond to failures without the need for human intervention. These measures help ensure that systems can recover quickly from failures, reducing the likelihood of long-lasting disruptions. To effectively implement Chaos Engineering and avoid incidents like the payments outage, organisations can start by formulating hypotheses about potential system weaknesses and failure points. They can then design chaos experiments that safely simulate these failures in controlled environments. Tools such as Chaos Monkey, Gremlin, or Litmus can automate the process of failure injection and monitoring, enabling engineers to observe system behaviour in response to simulated disruptions. By collecting and analysing data from these experiments, organisations can learn from the failures and use these insights to improve system resilience. This process should be iterative, and organisations should continuously run new experiments and refine their systems based on the results.
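The experiment loop described above can be sketched in a few lines. This is a hypothetical illustration, not how Chaos Monkey or Gremlin work internally: a fault is injected into a dependency, and the hypothesis is that a fallback path keeps the service available:

```python
import random

# Minimal chaos-experiment sketch: inject failures into a dependency
# and verify the service's self-healing fallback keeps it available.

def flaky_dependency(failure_rate):
    if random.random() < failure_rate:
        raise ConnectionError("injected failure")
    return "fresh-data"

def service_call(failure_rate):
    try:
        return flaky_dependency(failure_rate)
    except ConnectionError:
        return "cached-data"  # redundant path: serve from cache

# Hypothesis: availability stays at 100% even at a 50% failure rate.
random.seed(7)
results = [service_call(0.5) for _ in range(1000)]
availability = sum(r in ("fresh-data", "cached-data") for r in results) / len(results)
print(availability)  # 1.0
```

In a real experiment the injection would happen in a controlled environment and the "hypothesis" would be checked against production-grade monitoring, then refined iteratively.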


Shifting left with telemetry pipelines: The future of data tiering at petabyte scale

In the context of observability and security, shifting left means accomplishing the analysis, transformation, and routing of logs, metrics, traces, and events very far upstream, extremely early in their usage lifecycle — a very different approach in comparison to the traditional “centralize then analyze” method. By integrating these processes earlier, teams can not only drastically reduce costs for otherwise prohibitive data volumes, but can even detect anomalies, performance issues, and potential security threats much quicker, before they become major problems in production. The rise of microservices and Kubernetes architectures has specifically accelerated this need, as the complexity and distributed nature of cloud-native applications demand more granular and real-time insights, and each localized data set is distributed when compared to the monoliths of the past. ... As telemetry data continues to grow at an exponential rate, enterprises face the challenge of managing costs without compromising on the insights they need in real time, or the requirement of data retention for audit, compliance, or forensic security investigations. This is where data tiering comes in. Data tiering is a strategy that segments data into different levels based on its value and use case, enabling organizations to optimize both cost and performance.
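Data tiering in a telemetry pipeline amounts to a routing decision per record. A minimal sketch, with tier rules that are illustrative assumptions rather than a standard:

```python
# Illustrative sketch of telemetry data tiering: route each record to a
# storage tier based on its value and use case.

def tier_for(record):
    if record["severity"] in ("error", "critical"):
        return "hot"      # real-time alerting and dashboards
    if record.get("compliance_relevant"):
        return "warm"     # queryable for audits and investigations
    return "cold"         # cheap object storage for rare forensics

records = [
    {"severity": "critical", "msg": "payment failed"},
    {"severity": "info", "compliance_relevant": True, "msg": "login"},
    {"severity": "debug", "msg": "cache miss"},
]

routed = {}
for r in records:
    routed.setdefault(tier_for(r), []).append(r["msg"])
print(routed)
```

Making this decision upstream, before the data ever reaches a central store, is the "shift left" the article describes: the expensive hot tier holds only what real-time use cases actually need.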


A Transformative Journey: Powering the Future with Data, AI, and Collaboration

The advancements in industrial data platforms and contextualization have been nothing short of remarkable. By making sense of data from different systems—whether through 3D models, images, or engineering diagrams—Cognite is enabling companies to build a powerful industrial knowledge graph, which can be used by AI to solve complex problems faster and more effectively than ever before. This new era of human-centric AI is not about replacing humans but enhancing their capabilities, giving them the tools to make better decisions, faster. Without the buy in from the people who will be affected by any new innovation or technology the probability of success is unlikely. Engaging these individuals early on in the process to solve the issues they find challenging, mundane, or highly repetitive, is critical to driving adoption and creating internal champions to further catalyze adoption. In a fascinating case study shared by one of Cognite’s partners, we learned about the transformative potential of data and AI in the chemical manufacturing sector. A plant operator described how the implementation of mobile devices powered by Cognite’s platform has drastically improved operational efficiency. 


Four Steps to Balance Agility and Security in DevSecOps

Tools like OWASP ZAP and Burp Suite can be integrated into continuous integration/continuous delivery (CI/CD) pipelines to automate security testing. For example, LinkedIn uses Ansible to automate its infrastructure provisioning, which reduces deployment times by 75%. By automating security checks, LinkedIn ensures that its rapid delivery processes remain secure. Automating security not only enhances speed but also improves the overall quality of software by catching issues before they reach production. Automated tools can perform static code analysis, vulnerability scanning and penetration testing without disrupting the development cycle, helping teams deploy secure software faster. ... As organizations look to the future, artificial intelligence (AI) and machine learning (ML) will play a crucial role in enhancing both security and agility. AI-driven security tools can predict potential vulnerabilities, automate incident response and even self-heal systems without human intervention. This not only improves security but also reduces the time spent on manual security reviews. AI-powered tools can analyze massive amounts of data, identifying patterns and potential threats that human teams may overlook. This can reduce downtime and the risk of cyberattacks, ultimately allowing organizations to deploy faster and more securely.
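Automated checks like these slot into a CI/CD stage. A toy illustration in Python (the check names and patterns are invented; real pipelines would run tools such as OWASP ZAP or a proper static-analysis engine):

```python
import re

# Toy security gate a pipeline stage might run over source changes.

CHECKS = [
    ("hardcoded-password", re.compile(r"password\s*=\s*['\"]\w+['\"]", re.I)),
    ("eval-of-input", re.compile(r"\beval\(")),
]

def scan(source: str):
    """Return the names of the checks the source fails."""
    return [name for name, pat in CHECKS if pat.search(source)]

snippet = 'password = "hunter2"\nresult = eval(user_input)'
findings = scan(snippet)
print(findings)
# A real pipeline would fail the build when findings is non-empty.
```

The value is the placement, not the sophistication: because the gate runs on every commit, issues are caught before they reach production rather than in a late manual review.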



Quote for the day:

"If you are truly a leader, you will help others to not just see themselves as they are, but also what they can become." -- David P. Schloss

Daily Tech Digest - September 04, 2024

What is HTTP/3? The next-generation web protocol

HTTPS will still be used as a mechanism for establishing secure connections, but traffic will be encrypted at the HTTP/3 level. Another way to say it is that TLS will be integrated into the network protocol instead of working alongside it. So, encryption will be moved into the transport layer and out of the app layer. This means more security by default—even the headers in HTTP/3 are encrypted—but there is a corresponding cost in CPU load. Overall, the idea is that communication will be faster due to improvements in how encryption is negotiated, and it will be simpler because it will be built-in at a lower level, avoiding the problems that arise from a diversity of implementations. ... In TCP, that continuity isn’t possible because the protocol only understands the IP address and port number. If either of those changes—as when you walk from one network to another while holding a mobile device—an entirely new connection must be established. This reconnection leads to a predictable performance degradation. The QUIC protocol introduces connection IDs or CIDs. For security, these are actually CID sets negotiated by the server and client. 
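The difference connection IDs make can be shown with a toy lookup table (a conceptual sketch only, not the actual QUIC wire format): TCP identifies a connection by its address 4-tuple, so a network change breaks it, while QUIC looks the session up by CID, so it survives.

```python
# Conceptual sketch: why QUIC connections survive a network change.

tcp_sessions = {("198.51.100.7", 52311, "203.0.113.9", 443): "session-A"}
quic_sessions = {"cid-7f3a": "session-A"}

def tcp_lookup(src_ip, src_port, dst_ip, dst_port):
    # TCP only understands the IP/port 4-tuple.
    return tcp_sessions.get((src_ip, src_port, dst_ip, dst_port))

def quic_lookup(cid):
    # QUIC looks up the session by connection ID instead.
    return quic_sessions.get(cid)

# Client walks from Wi-Fi to cellular: its source IP changes.
print(tcp_lookup("198.51.100.99", 52311, "203.0.113.9", 443))  # None: must reconnect
print(quic_lookup("cid-7f3a"))                                 # session-A: continues
```

In real QUIC the CIDs are sets negotiated by client and server for security, but the lookup principle is the same.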


6 things hackers know that they don’t want security pros to know that they know

It’s not a coincidence that many attacks happen at the most challenging of times. Hackers really do increase their attacks on weekends and holidays when security teams are lean. And they’re more likely to strike right before lunchtime and end-of-day, when workers are rushing and consequently less attentive to red flags indicating a phishing attack or fraudulent activity. “Hackers typically deploy their attacks during those times because they’re less likely to be noticed,” says Melissa DeOrio, global threat intelligence lead at S-RM, a global intelligence and cybersecurity consultancy. ... Threat actors actively engage in open-source intelligence (OSINT) gathering, looking for information they can use to devise attacks, Carruthers says. It’s not surprising that hackers look for news about transformative events such as big layoffs, mergers and the like, she says. But CISOs, their teams and other executives may be surprised to learn that hackers also look for news about seemingly innocuous events such as technology implementations, new partnerships, hiring sprees, and executive schedules that could reveal when they’re out of the office.


Take the ‘Shift Left’ Approach a Step Further by ‘Starting Left’

This makes it vital to guarantee code quality and security from the start so that nothing slips through the cracks. Shift left accounts for this. It minimizes the risk of bugs and vulnerabilities by introducing code testing and analysis earlier in the SDLC, catching problems before they mount and become trickier to solve or even find. Advancing testing activities earlier puts DevOps teams in a position to deliver superior-quality software to customers with greater frequency. As a practice, “shift left” requires a lot more vigilance in today’s security landscape. But most development teams don’t have the mental (or physical) bandwidth to do it properly — even though it should be an intrinsic part of code development strategy. In fact, the Linux Foundation recently revealed in a study that almost one-third of developers aren’t familiar with secure software development practices. “Shifting left” — performing analysis and code reviews earlier in the development process — is a popular mindset for creating better software. What the mindset should be, though, is to “start left,” not just impose the burden later on in the SDLC for developers. ... This mindset of “start left” focuses not only on an approach that values testing early and often, but also on using the best tools to do so. 


ONCD Unveils BGP Security Road Map Amid Rising Threats

The guidance comes amid an intensified threat landscape for BGP, which serves as the backbone of global internet traffic routing. BGP is a foundational yet vulnerable protocol, developed at a time when many of today's cybersecurity risks did not exist. Coker said the ONCD is committed to covering at least 60% of the federal government's IP space by registration service agreements "by the end of this calendar year." His office recently led an effort to develop a federal RSA template that federal agencies can use to facilitate their adoption of Resource Public Key Infrastructure, which can be used to mitigate BGP vulnerabilities. ... The ONCD report underscores how BGP "does not provide adequate security and resilience features" and lacks critical security capabilities, including the ability to validate the authority of remote networks to originate route announcements and to ensure the authenticity and integrity of routing information. The guidance tasks network operators with developing and periodically updating cybersecurity risk management plans that explicitly address internet routing security and resilience. It also instructs operators to identify all information systems and services internal to the organization that require internet access and assess the criticality of maintaining those routes for each address.
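Route Origin Validation with RPKI boils down to checking an announcement against published ROAs (prefix, maximum length, authorized origin ASN). A simplified sketch with invented ROA data; real validators consume signed objects from the RPKI repositories:

```python
import ipaddress

# Simplified sketch of RPKI route-origin validation.

ROAS = [
    {"prefix": ipaddress.ip_network("192.0.2.0/24"), "max_len": 24, "asn": 64500},
]

def validate(prefix_str, origin_asn):
    prefix = ipaddress.ip_network(prefix_str)
    covered = [r for r in ROAS if prefix.subnet_of(r["prefix"])]
    if not covered:
        return "unknown"   # no ROA covers this prefix
    for roa in covered:
        if origin_asn == roa["asn"] and prefix.prefixlen <= roa["max_len"]:
            return "valid"
    return "invalid"       # likely hijack or misconfiguration

print(validate("192.0.2.0/24", 64500))  # valid
print(validate("192.0.2.0/24", 64666))  # invalid
```

This is the missing capability the report calls out: BGP itself cannot check whether a remote network is authorized to originate a route, so the validation has to come from RPKI data layered on top.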


Efficient DevSecOps Workflows With a Little Help From AI

When it comes to software development, AI offers lots of possibilities to enhance workflows at every stage—from splitting teams into specialized roles such as development, operations, and security to facilitating typical steps like planning, managing, coding, testing, documentation, and review. AI-powered code suggestions and generation capabilities can automate tasks like autocompletion and identification of missing dependencies, making coding more efficient. Additionally, AI can provide code explanations, summarizing algorithms, suggesting performance improvements, and refactoring long code into object-oriented patterns or different languages. ... Instead of manually sifting through job logs, AI can analyze them and provide actionable insights, even suggesting fixes. By refining prompts and engaging in conversations with the AI, developers can quickly diagnose and resolve issues, even receiving tips for optimization. Security is crucial, so sensitive data like passwords and credentials must be filtered before analysis. A well-crafted prompt can instruct the AI to explain the root cause in a way any software engineer can understand, accelerating troubleshooting. This approach can significantly improve developer efficiency.
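The filtering step mentioned above can be as simple as scrubbing credential patterns from logs before they reach the model. A minimal sketch, with illustrative patterns that are nowhere near exhaustive:

```python
import re

# Scrub obvious credentials from job logs before AI-assisted analysis.

REDACTIONS = [
    (re.compile(r"(password|token|api[_-]?key)\s*[:=]\s*\S+", re.I),
     r"\1=[REDACTED]"),
]

def scrub(log: str) -> str:
    for pattern, repl in REDACTIONS:
        log = pattern.sub(repl, log)
    return log

line = "deploy failed: token=abc123 retrying with password: hunter2"
print(scrub(line))
```

A production setup would use a dedicated secrets scanner, but even a regex pass like this keeps the most obvious sensitive values out of prompts sent to an external model.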


PricewaterhouseCoopers’ new CAIO – workers need to know their role with AI

“AI is becoming a natural part of everything we make and do. We’re moving past the AI exploration cycle, where managing AI is no longer just about tech, it is about helping companies solve big, important and meaningful problems that also drive a lot of economic value.” “But the only way we can get there is by bringing AI into an organization’s business strategy, capability systems, products and services, ways of working and through your people. AI is more than just a tool — it can be viewed as a member of the team, embedding into the end-to-end value chain. The more AI becomes naturally embedded and intrinsic to an organization, the more it will help both the workforce and business be more productive and deliver better value.” “In addition, we will see new products and services that are fully AI-powered come into the market — and those are going to be key drivers of revenue and growth.” ... You need to consider the bigger picture, understanding how AI is becoming integrated in all aspects of your organization. That means having your RAI leader working closely with your company’s CAIO (or equivalent) to understand changes in your operating model, business processes, products and services.


What Is Active Metadata and Why Does It Matter?

Active metadata’s ability to update automatically whenever the data it describes changes now extends beyond the data profile itself to enhance the management of data access, classification, and quality. Passive metadata’s static nature limits its use to data discovery, but the dynamic nature of active metadata delivers real-time insights into the data’s lineage to help automate data governance: Get a 360-degree view of data - Active metadata’s ability to auto-update ensures that metadata delivers complete and up-to-date descriptions of the data’s lineage, context, and quality. Companies can tell at a glance whether the data is being used effectively, appropriately, and in compliance with applicable regulations. Monitor data quality in real time - Automatic metadata updates improve data quality management by providing up-to-the-minute metrics on data completeness, accuracy, and consistency. This allows organizations to identify and respond to potential data problems before they affect the business. Patch potential governance holes - Active metadata allows data governance rules to be enforced automatically to safeguard access to the data, ensure it’s appropriately classified, and confirm it meets all data retention requirements. 
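The defining behavior described above, metadata that refreshes itself whenever the data it describes changes, can be sketched in a few lines. This is a toy illustration under assumed names (`ActiveMetadata`, `write`, `profile`); real platforms hook into change events across catalogs, pipelines, and warehouses.

```python
from datetime import datetime, timezone

class ActiveMetadata:
    """Illustrative sketch: a dataset whose metadata profile is recomputed
    on every change, rather than waiting for a periodic manual scan."""

    def __init__(self, name: str):
        self.name = name
        self.rows = []
        self.profile = {}  # auto-maintained metadata
        self._refresh()

    def write(self, row: dict):
        """Every write triggers a metadata refresh (the 'active' part)."""
        self.rows.append(row)
        self._refresh()

    def _refresh(self):
        total = len(self.rows)
        # Completeness: fraction of rows with no null fields
        complete = sum(1 for r in self.rows
                       if all(v is not None for v in r.values()))
        self.profile = {
            "row_count": total,
            "completeness": complete / total if total else 1.0,
            "last_updated": datetime.now(timezone.utc).isoformat(),
        }
```

With this shape, a quality monitor can simply watch `profile["completeness"]` drop after a bad write and alert before the problem reaches downstream consumers.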


How to Get IT and Security Teams to Work Together Effectively

Successful collaboration requires a sense of shared mission, Preuss says. Transparency is crucial. "Leverage technology and automation to effectively share information and challenges across both teams," she advises. Building and practicing trust and communication in an environment that's outside the norm is also essential. One way to do so is by conducting joint business resilience drills. "Whether a cyber war game or an environmental crisis [exercise], resilience drills are one way to test the collaboration between teams before an event occurs." ... When it comes to cross-team collaboration, Scott says it's important for members to understand their communication style as well as the communication styles of the people they work with. "At Immuta, we do this through a DiSC assessment, which each employee is invited to complete upon joining the company." To build an overall sense of cooperation and teamwork, Jeff Orr, director of research, digital technology at technology research and advisory firm ISG, suggests launching an exercise simulation in which both teams are required to collaborate in order to succeed. 


Protecting national interests: Balancing cybersecurity and operational realities

A significant challenge we face today is safeguarding the information space against misinformation, disinformation, manipulation and deceptive content. Whether this is at the behest of nation-states, or their supporters, it can be immensely destabilising and disruptive. We must find a way to tackle this challenge, but this should not just focus on the responsibilities held by social media platforms, but also on how we can detect targeted misinformation, counter those narratives and block the sources. Technology companies have a key role in taking down content that is obviously malicious, but we need the processes to respond in hours, rather than days and weeks. More generally, infrastructure used to launch attacks can be spun up more quickly than ever and attacks manifest at speed. This requires the government to work more closely with major technology and telecommunication providers so we can block and counter these threats – and that demands information sharing mechanisms and legal frameworks which enable this. Investigating and countering modern transnational cybercrime demands very different approaches, and AI will undoubtedly play a big part in this, though sadly in both attack and defence.


How leading CIOs cultivate business-centric IT

With digital strategy and technology as the brains behind most business functions and operating models, IT organizations are determined to inject more business-centricity into their employee DNA. IT leaders have been burnishing their business acumen and embracing a non-technical remit for some time. Now, there’s a growing desire to infuse that mentality throughout the greater IT organization, stretching beyond basic business-IT alignment to creating a collaborative force hyper-fixated on channeling innovation to advance enterprise business goals. “IT is no longer the group in the rear with the gear,” says Sabina Ewing, senior vice president of business and technology services and CIO at Abbott Laboratories. ... While those with robust experience and expertise in highly technical areas such as cloud architecture or cybersecurity are still highly coveted, IT organizations like Duke Health, ServiceNow, and others are also seeking a very different type of persona. Zoetis, a leading animal health care company, casts a wider net when seeking tech and digital talent, focusing on those who are collaborative, passionate about making a difference, and adaptable to change. Candidates should also have a strong understanding of technology application, says CIO Keith Sarbaugh.



Quote for the day:

"When someone tells me no, it doesn't mean I can't do it, it simply means I can't do it with them." -- Karen E. Quinones Miller