Daily Tech Digest - December 22, 2024

3 Steps To Include AI In Your Future Strategic Plans

AI is complex and multifaceted, so adopting it is not as simple as replacing legacy systems with new technology. Leaders need to dig deeper to uncover barriers and opportunities. This can involve inviting external experts to discuss AI's benefits and challenges, hosting workshops where team members can explore different case studies, or creating internal discussion groups focused on various aspects of AI technology and potential barriers to adoption. ... A strong strategic plan should clearly link prospective investments to the organization's purpose and mission. For example, if customer centricity is central to the mission, any investment in new technology should directly connect to improving customer outcomes. ... A strategic plan should not only outline planned AI initiatives but also provide a clear roadmap for implementation. Given that AI is still evolving, it's crucial not to create a roadmap in isolation from ever-changing business challenges, market dynamics, or technological advancements. ... In this context, an AI strategy roadmap should be emergent, meaning it should be grounded in key strategic intentions while also being flexible enough to adapt to unforeseen events or black swan occurrences that necessitate rethinking and adjustments.


Can Pure Scrum Actually Work?

“Pure Scrum,” described in the Scrum Guide, is an idiosyncratic framework that helps create customer value in a complex environment. However, five main issues challenge its general corporate application: (1) Pure Scrum focuses on delivery: how can we avoid running in the wrong direction by building things that do not solve our customers’ problems? (2) Pure Scrum ignores product discovery in particular and product management in general; if you think of the Double Diamond, to use a popular picture, Scrum is focused on the right side. (3) Pure Scrum is designed around one team focused on supporting one product or service. (4) Pure Scrum does not address portfolio management; it is not designed to align and manage multiple product initiatives or projects to achieve strategic business objectives. (5) Pure Scrum is based on far-reaching team autonomy: the Product Owner decides what to build, the Developers decide how to build it, and the Scrum team self-manages. ... At its core, pure Scrum is less a project management framework and more a reflection of an organization’s fundamental approach to creating value. It requires a profound shift from seeing work as a series of prescribed steps to viewing it as a continuous journey of discovery and adaptation.


The Rise of Agentic AI: How Hyper-Automation is Reshaping Cybersecurity and the Workforce

As AI advances, concerns about job displacement grow louder. For years, organizations have reassured employees that AI will “enhance, not replace” human roles. Smith offered a more nuanced perspective: “AI will replace tasks, not people—at least in the near term. Human oversight remains critical because we still don’t fully understand AI behavior.” In cybersecurity, AI acts as a force multiplier, streamlining tedious tasks like data analysis and incident documentation while enabling humans to focus on strategic decisions. This collaboration allows professionals to do more with less, amplifying productivity without eliminating the need for human expertise. However, Smith acknowledged long-term challenges. ... The rise of agentic AI marks a transformative moment for cybersecurity and the workforce. As organizations move beyond static workflows and embrace dynamic, autonomous systems, they gain the ability to respond to threats faster and more efficiently than ever before. However, this evolution demands a strategic approach—one that balances automation with human oversight, strengthens defenses against AI-driven attacks, and prepares for the societal shifts AI will bring.


If ChatGPT produces AI-generated code for your app, who does it really belong to?

From a contractual point of view, Santalesa contends that most companies producing AI-generated code will, "as with all of their other IP, deem their provided materials -- including AI-generated code -- as their property." OpenAI (the company behind ChatGPT) does not claim ownership of generated content. According to their terms of service, "OpenAI hereby assigns to you all its right, title, and interest in and to Output." Clearly, though, if you're creating an application that uses code written by an AI, you'll need to carefully investigate who owns (or claims to own) what. For a view of code ownership outside the US, ZDNET turned to Robert Piasentin, a Vancouver-based partner in the Technology Group at McMillan LLP, a Canadian business law firm. He says that ownership, as it pertains to AI-generated works, is still an "unsettled area of the law." ... Piasentin says there may already be some UK case law precedent, based not on AI but on video game litigation. A case before the UK High Court determined that images produced in a video game were the property of the game developer, not the player -- even though the player manipulated the game to produce a unique arrangement of game assets on the screen.


Supply Chain Risk Mitigation Must Be a Priority in 2025

Implementing impactful supply chain protections is far easier said than done, due to the complexity, scale, and integration of modern supply chain ecosystems. While there isn't a silver bullet for eradicating threats entirely, prioritizing a targeted focus on effective supply chain risk management principles in 2025 is a critical place to start. It will require an optimal balance of rigorous supplier validation, purposeful data exposure, and meticulous preparation. ... As supply chain attacks accelerate, organizations must operate under the assumption that a breach isn't just possible — it's probable. An "assumption of breach" mindset shift will help drive more meticulous approaches to preparation via comprehensive supply chain incident response and risk mitigation. Preparation measures should begin with developing and regularly updating agile incident response processes that specifically cater to third-party and supply chain risks. For effectiveness, these processes will need to be well-documented and frequently practiced through realistic simulations and tabletop exercises. Such drills help identify potential gaps in the response strategy and ensure that all team members understand their roles and responsibilities during a crisis.


The End of Bureaucracy — How Leadership Must Evolve in the Age of Artificial Intelligence

AI doesn't just optimize — it transforms. It flattens hierarchies, demands transparency and dismantles traditional power structures. For those managers who thrive on gatekeeping, AI represents a fundamental threat, eliminating barriers they've spent careers building. Consider this: AI thrives on efficiency, speed and clarity. Tasks that once consumed hours of human effort — like vetting vendor contracts or managing customer service inquiries — are now handled instantly by AI systems. Employees can experiment with bold ideas without wading through endless committee approvals. But the true power of AI lies in decentralizing decision-making. By analyzing vast datasets, AI equips frontline employees with actionable insights that previously required executive oversight. This creates organizations that are faster, more agile and less dependent on gatekeepers. ... In an AI-first world, hierarchies will begin to collapse as real-time data eliminates the need for multiple layers of oversight, enabling faster and more efficient decision-making. At the same time, workflows will be reimagined as leaders take on the critical task of redesigning processes to seamlessly integrate AI, ensuring organizations can adapt quickly and effectively.


GAO report says DHS, other agencies need to up their game in AI risk assessment

The GAO said it is “recommending that DHS act quickly to update its guidance and template for AI risk assessments to address the remaining gaps identified in this report.” DHS, in turn, it said, “agreed with our recommendation and stated it plans to provide agencies with additional guidance that addresses gaps in the report including identifying potential risks and evaluating the level of risk.” ... AI, he said, “is being pushed out to businesses and consumers by organizations that profit from doing so, and assessing and addressing the potential harm it may cause has until recently been an afterthought. We are now seeing more focus on these potential negative effects, but efforts to contain them, let alone prevent them, will always be far behind the steamroller of new innovations in the AI realm.” Thomas Randall, research lead at Info-Tech Research Group, said, “it is interesting that the DHS had no assessments that evaluated the level of risk for AI use and implementation, but had largely identified mitigation strategies. What this may mean is the DHS is taking a precautionary approach in the time it was given to complete this assessment.” Some risks, he said, “may be identified as significant enough to warrant mitigation regardless of precise quantification of that risk.”


How CI/CD Helps Minimize Technical Debt in Software Projects

One of the foundational principles of CI/CD is the enforcement of automated testing. Automated tests, such as unit tests, integration tests, and end-to-end tests, ensure that code changes do not break existing functionality. By integrating testing into the CI pipeline, developers are alerted to issues immediately after they commit code. ... CI/CD pipelines facilitate incremental and iterative development by encouraging small, frequent code commits. Large, monolithic changes often introduce complexity and technical debt because they are harder to test, debug, and review effectively. ... Technical debt often arises from manual processes that are error-prone and time-consuming. CI/CD eliminates many of these inefficiencies by automating repetitive tasks, such as building, testing, and deploying applications. Automation ensures that these steps are performed consistently and accurately, reducing the risk of human error. ... Code reviews are a critical component of maintaining high-quality software. CI/CD tools enhance the code review process by providing automated feedback on every commit. This feedback loop fosters a culture of accountability and continuous improvement among developers.
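To make the testing principle concrete, here is a minimal sketch of the kind of unit test a CI pipeline would run on every commit; the function, file names, and values are illustrative, not from the article:

```python
# A minimal sketch of a CI-enforced unit test (all names illustrative).
# --- calculator.py ---
def apply_discount(price: float, pct: float) -> float:
    """Apply a percentage discount, clamping pct to the 0-100 range."""
    pct = max(0.0, min(100.0, pct))
    return round(price * (1 - pct / 100), 2)

# --- test_calculator.py --- run on every commit, e.g. with `pytest -q`
def test_discount_basic():
    assert apply_discount(100.0, 20) == 80.0

def test_discount_clamped():
    # If a change breaks the clamping, the pipeline fails the commit
    # immediately, instead of letting the regression accrue as debt.
    assert apply_discount(100.0, 150) == 0.0
```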


Cost-conscious repatriation strategies

First, this is not a pushback on cloud technology as a concept; cloud works and has worked for the past 15 years. This repatriation trend highlights concerns about the unexpectedly high costs of cloud services, especially when enterprises feel they were promised lowered IT expenses during the earlier “cloud-only” revolutions. Leaders must adopt a more strategic perspective on their cloud architecture. It’s no longer just about lifting and shifting workloads into the cloud; it’s about effectively tailoring applications to leverage cloud-native capabilities—a lesson GEICO learned too late. A holistic approach to data management and technology strategies that aligns with an organization’s unique needs is the path to success and lower bills. Organizations are now exploring hybrid environments that blend public cloud capabilities with private infrastructure. A dual approach, which is nothing new, allows for greater data control, reduced storage and processing costs, and improved service reliability. Weekly noted that there are ways to manage capital expenditures in an operational expense model through on-premises solutions. On-prem systems tend to be more predictable and cost-effective over time.


Cyber Resilience: Adapting to Threats in the Cloud Era

Use cloud-native security solutions that offer automated threat detection, incident response, and monitoring. These technologies ought to be flexible enough to adjust to changes in the cloud environment and defend against new risks as they arise. ... Effective cyber resilience plans enable businesses to recover quickly from emergencies by reducing downtime and maintaining continuous service delivery. Businesses that put flexibility first can manage emergencies with few problems, which helps them keep the confidence and trust of their clients. Cyber resilience strongly emphasizes flexibility, enabling companies to address new risks in the ever-evolving digital environment. Businesses can lower financial losses and safeguard their reputation by concentrating on data protection and breach remediation. Finding and fixing common setup mistakes in cloud systems that could lead to security issues and data breaches requires using Cloud Security Posture Management (CSPM) tools. ... Because criminals frequently use these configuration errors to cause data breaches and security incidents, it is essential to identify them. Organizations may monitor their cloud environments and ensure that settings follow security best practices and regulations by using CSPM solutions.



Quote for the day:

"Listen with curiosity, speak with honesty act with integrity." -- Roy T Bennett

Daily Tech Digest - December 21, 2024

The New Paradigm – The Rise of the Virtual Architect

We’re on the brink of a new paradigm in Enterprise Architecture—one where architects will have unprecedented access to knowledge, insights, and tools through what I call the Virtual Architect. The Virtual Architect isn’t limited to financial services. I’ve seen interest across industries like insurance and telecoms, where clients are eager to deploy such solutions. Why? Because it promises to provide accurate, real-time information, support colleagues, and even generate designs. Yes, you read that right—design generation is on the table. Naturally, this raises a big question: does this mean architects will be replaced? We’ll get to that in a moment. ... But here’s the catch: how do we ensure the designs generated by a Virtual Architect are accurate? The old saying applies—it’s only as good as the quality of the data and designs you feed in. That is where ongoing training and validation from architects remain crucial. So, will the Virtual Architect replace human architects? I don’t believe so, not in the near future. Designing systems is just one aspect of an architect’s role. Stakeholder engagement, strategic thinking, and soft skills are equally important—and these are areas where AI still falls short. For now, the Virtual Architect is an enhancement, not a replacement. 


IT/OT convergence propels zero-trust security efforts

Companies want flexibility in how end users and business applications access and interact with OT systems. ... Enterprises also want to extract data from OT systems, which requires network connectivity. For example, manufacturers can pull real-time data from their assembly lines so that specialized analytics applications can identify opportunities for efficiency and predict disruptions to production. While converging OT onto IT networks can drive innovation, it exposes OT systems to the threats that proliferate in the digital world. Companies often need new security solutions to protect OT. EMA’s latest research report, “Zero Trust Networking: How Network Teams Support Cybersecurity,” revealed that IT/OT convergence drives 38% of enterprise zero-trust security strategies. ... IT/OT convergence leads enterprises to set different priorities for zero-trust solution requirements. When modernizing secure remote access solutions for zero trust, OT-focused companies have a stronger need for granular policy management capabilities. These companies are more likely to have a secure remote access solution that can cut off network access in response to anomalous behavior or changes in the state of a device. When implementing zero-trust network segmentation, OT-focused companies are more likely to seek a solution with dynamic and adaptive segmentation controls.


Why Enterprises Still Grapple With Data Governance

“Even in highly regulated industries where the acceptance and understanding of the concept and value of governance more broadly are ingrained into the corporate culture, most data governance programs have progressed very little past an expensive box-checking exercise, one that has kept regulatory queries to a minimum but returned very little additional business value on the investment,” says Willis in an email interview. ... Why the disconnect? Data teams don’t feel they can spend time understanding stakeholders or even challenging business stakeholder needs. Though executive support is critical, data governance professionals are not making the most out of that support. One often unacknowledged problem is culture. “Unfortunately, in many organizations, the predominant attitude towards governance and risk management is that [they are] a burden of bureaucracy that slows innovation,” says Willis. “Data governance teams too frequently perpetuate that mindset, over-rotating on data controls and processes where the effort to execute is misaligned to the value they release.” One way to begin improving the effectiveness of data governance is to reassess the organization’s objectives and approach.


What Is Next-Generation Data Protection and Why Should Enterprise Tech Buyers Care?

Next-generation data protection was created to combat today’s most sophisticated and dangerous cyberattacks. It expands the purview of what is protected and how it is protected within an enterprise data infrastructure. This new approach also adds preemptive and predictive capabilities that help mitigate the effects of massive cyberattacks. Moreover, next-generation data protection is the last line of defense against the most vicious, unscrupulous cyber criminals who want nothing more than to take down and harm large companies, either for monetary gain or respect amongst fellow criminals. Therefore, understanding and implementing next-generation data protection is vital. ... To make data protection highly effective today for the datasets that seem most critical, it has to be highly integrated and orchestrated. You don’t want a manual process making a weak spot for your organization. To resolve this issue, one of the breakthrough capabilities of next-generation data protection is automated cyber protection. Automated cyber protection seamlessly integrates cyber storage resilience into a security operations center (SOC) and data center-wide cyber security applications, such as SIEM and SOAR.


Federal Cyber Operations Would Downgrade Under Shutdown

The pending shutdown could trigger major cutbacks to critical technology services across the federal government, including DHS's Science and Technology Directorate, which provides technical expertise to address emerging threats impacting DHS, first responders and private sector organizations. During a lapse in appropriations, just 31 of its staff members would be retained, representing a staggering 94% reduction in its workforce. The shutdown could also lead to longer airport lines and furloughs for hundreds of thousands of federal workers. Brian Fox, CTO of software supply chain management firm Sonatype, previously told Information Security Media Group that CISA plays a critical role in safeguarding government infrastructure during periods of political turbulence. "It's no secret that times of uncertainty, change and disruption are prime opportunities for threat actors to increase efforts to infiltrate systems," Fox said. The shutdown is set to begin at 12:01 a.m. on Saturday, December 21, unless lawmakers can pass a short-term spending bill, after the House rejected a compromise package Thursday night following online remarks from President-elect Donald Trump and his billionaire government efficiency advisor, Elon Musk.


Why cybersecurity is critical to energy modernization

Connected infrastructures for renewables, in many cases, are operated by new companies or even residential users. They don’t have a background in managing reliability and, generally, have very limited or no cybersecurity expertise. Despite this, they all oversee internet-connected systems that are digitally controlled and therefore vulnerable to hacking. The cumulative power controlled by many connected parties also poses a risk of blackouts. The concern is about the suppliers, especially for consumer equipment, as it is not possible to impose security regulations on consumers. The Cyber Resilience Act tries to address suppliers but is likely not sufficient. ... International collaboration is crucial in addressing the cybersecurity risks posed by interconnected energy grids. By sharing knowledge, harmonizing standards, and coordinating joint incident response efforts, countries can collectively enhance their preparedness and resilience. There are various formal international collaborations, such as ENTSO-E and the DSO Entity SEEG, coordination groups like WG8 in NIS, and partnerships between experts and authorities in groups like NCCS. International exercises led by organizations like ENISA and NATO further support these initiatives.


US Ban on TP-Link Routers More About Politics Than Exploitation Risk

While no researcher has called out a specific backdoor or zero-day vulnerability in TP-Link routers, restricting products from a country that is a political and economic rival is not unreasonable, says Thomas Pace, CEO of extended Internet of Things (IoT) security firm NetRise and a former head of cybersecurity for the US Department of Energy. ... Companies and consumers should do their due diligence, keep their devices up to date with the latest security patches, and consider whether the manufacturer of their critical hardware may have secondary motives, says Phosphorus Cybersecurity's Shankar. "The vast majority of successful attacks on IoT are enabled by preventable issues like static, unchanged default passwords, or unpatched firmware, leaving systems exposed," he says. "For business operators and consumer end-users, the key takeaway is clear: adopting basic security hygiene is a critical defense against both opportunistic and sophisticated attacks. Don’t leave the front door open." For companies worried about the origin of their networking devices or the security of their supply chain, finding a trusted third party to manage the devices is a reasonable option. In reality, though, almost every device should be monitored and not trusted, says NetRise's Pace.


The Next Big Thing: How Generative AI Is Reshaping DevOps in the Cloud

One of the biggest impacts of AI on DevOps is in Continuous Integration and Continuous Delivery (CI/CD) pipelines. These pipelines help automate how code changes are managed and deployed to production environments. Automation in this area makes operations more efficient. However, as codebases grow and get more complex, these pipelines often need manual tuning and adjustments to run smoothly. AI impacts this by making pipelines smarter. It can analyze historical data, like build times, test results, and deployment patterns. By doing this, it can adjust how pipelines are set up to minimize bottlenecks and use resources better. For example, AI can decide which tests to run first. It chooses tests that are more likely to find bugs from code changes. This helps to speed up the process of testing and deploying code. ... Security has always been very important for cloud-native apps and DevOps teams. With Generative AI, we can now move from reactive to proactive when it comes to system vulnerabilities. Instead of just waiting for security issues to appear, AI helps DevOps teams spot and prevent potential risks ahead of time. AI-powered security tools can perform data analysis on a company’s cloud system. 
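As a rough illustration of the test-prioritization idea, the toy sketch below (mine, not from the article) orders tests by how often they failed in past runs that touched the same files; the history, file, and test names are all hypothetical:

```python
from collections import defaultdict

# Hypothetical CI history: (file changed, test that then failed) pairs.
history = [
    ("billing.py", "test_invoice_total"),
    ("billing.py", "test_invoice_total"),
    ("billing.py", "test_tax_rounding"),
    ("auth.py", "test_login_lockout"),
]

def prioritize_tests(changed_files, history, all_tests):
    """Run tests that historically failed after touching the same files
    first; ties keep their original order (sorted is stable)."""
    score = defaultdict(int)
    for changed_file, failed_test in history:
        if changed_file in changed_files:
            score[failed_test] += 1
    return sorted(all_tests, key=lambda t: -score[t])

all_tests = ["test_login_lockout", "test_invoice_total", "test_tax_rounding"]
print(prioritize_tests({"billing.py"}, history, all_tests))
# -> ['test_invoice_total', 'test_tax_rounding', 'test_login_lockout']
```

A production system would learn these weights from richer signals (coverage maps, code churn, flakiness), but the ranking principle is the same.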


US order is a reminder that cloud platforms aren’t secure out of the box

Affected IT departments are ordered to implement a set of baseline configurations set out by the Secure Cloud Business Applications (SCuBA) project for certain software as a service (SaaS) platforms. So far, the directive notes, the only final configuration baseline set is for Microsoft 365. There is also a baseline configuration for Google Workspace listed on the SCuBA website that isn’t mentioned in this week’s directive. However, the order does say that in the future, CISA may release additional SCuBA Secure Configuration Baselines for other cloud products. When the baselines are issued, they will also fall under the scope of this week’s directive. ... Coincidentally, the CISA directive comes the same week as CSO reported that Amazon has halted its deployment of M365 for a full year, as Microsoft tries to fix a long list of security problems that Amazon identified. A CISA spokesperson said he couldn’t comment on why the directive was issued this week, but Dubrovsky believes it’s “more of a generic warning” to federal departments, and not linked to an event. Asked how private-sector CISOs should secure cloud platforms, Dubrovsky said they should start with cybersecurity basics. That includes implementing tough identity and access management policies, including MFA, and performing network monitoring and alerting for abnormalities, before going into the cloud.


The value of generosity in leadership

For the first time we have five generations in the workforce, which means that needs, priorities, and sources of meaning vary. Generosity becomes much more important because you cannot achieve everything by yourself. You can only do that by empowering others and giving them the tools, opportunities, and trust they need to succeed. And then, hopefully, they can together fulfill the organization’s purpose, objectives, and dreams. ... The opposite of a generous leader is a narcissistic leader, who is focused on themselves. Narcissistic leaders are not as effective as leaders who have higher EQs [emotional quotients], who are more generous and recognize that the team’s performance is a result of something beyond themselves. But for one reason or another, narcissistic leaders continue to rise to the top. ... That link between being generous with yourself and being generous with others is so important. When I’ve seen leaders really unlock a new level of leadership, and generosity in leadership, it comes from first and foremost understanding how to lead themselves, and specifically, how to control the amygdala hijack that can send you below the line. Those are very real physiological tendencies that can create what appears to be a zero-sum context based on winning and losing. 



Quote for the day:

"Small daily imporevement over time lead to stunning results." -- Robin Sherman

Daily Tech Digest - December 20, 2024

The Top 25 Security Predictions for 2025

“Malicious actors will go full throttle in mining the potential of AI in making cyber crime easier, faster and deadlier. But this emerging and ever-evolving technology can also be made to work for enterprise security and protection by harnessing it for threat intelligence, asset profile management, attack path prediction and remediation guidance. As SOCs catch up to secure innovations still unfolding, protecting enterprises from tried and tested modes of attack remains essential. While innovation makes for novel ways to strike, criminals will still utilize what is easy and what has worked for them for years.” ... Organizations are urged to embrace scalable, cloud-native security information and event management (SIEM) solutions. These tools improve threat detection and response by integrating logs from cloud and endpoint systems and automating incident management with security orchestration, automation, and response (SOAR) features. ... While targets like edge devices will continue to capture the attention of threat actors, there’s another part of the attack surface that defenders must pay close attention to over the next few years: their cloud environments. Although cloud isn’t new, it’s increasingly piquing the interest of cyber criminals.


Why AI language models choke on too much text

Although RNNs have fallen out of favor since the invention of the transformer, people have continued trying to develop RNNs suitable for training on modern GPUs. In April, Google announced a new model called Infini-attention. It’s kind of a hybrid between a transformer and an RNN. Infini-attention handles recent tokens like a normal transformer, remembering them and recalling them using an attention mechanism. However, Infini-attention doesn’t try to remember every token in a model’s context. Instead, it stores older tokens in a “compressive memory” that works something like the hidden state of an RNN. This data structure can perfectly store and recall a few tokens, but as the number of tokens grows, its recall becomes lossier. ... Transformers are good at information recall because they “remember” every token of their context—this is also why they become less efficient as the context grows. In contrast, Mamba tries to compress the context into a fixed-size state, which necessarily means discarding some information from long contexts. The Nvidia team found they got the best performance from a hybrid architecture that interleaved 24 Mamba layers with four attention layers. This worked better than either a pure transformer model or a pure Mamba model.
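A toy calculation (mine, not from the article) makes the trade-off concrete: a transformer's key-value cache grows with every token, while a Mamba-style recurrent state stays fixed, which is exactly why its recall gets lossier on long contexts. All dimensions below are invented for illustration:

```python
# Toy numbers, purely illustrative -- not real model dimensions.

def kv_cache_entries(num_tokens: int, num_layers: int = 28) -> int:
    # A transformer keeps one key/value pair per token per layer, so
    # memory grows linearly with context (and attention compute grows
    # quadratically, since each new token attends to all prior ones).
    return num_tokens * num_layers

def recurrent_state_entries(state_size: int = 4096) -> int:
    # A Mamba-style model compresses the entire context into a
    # fixed-size state: flat cost, but lossier recall as it fills up.
    return state_size

for n in (1_000, 100_000):
    print(f"{n:>7} tokens: cache={kv_cache_entries(n):,} state={recurrent_state_entries():,}")

# The Nvidia hybrid interleaves the two; one plausible 28-layer layout
# (the exact ordering isn't specified in the text):
layers = (["mamba"] * 6 + ["attention"]) * 4
assert layers.count("mamba") == 24 and layers.count("attention") == 4
```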


The End of ‘Apps,’ Brought to You by AI?

Achieving the dream of a unified customer experience is possible, not by building a bigger app, but by deploying AI super agents. Much of the groundwork has already been done: AI language models like Claude and GPT-4 are already designed to support many use cases, and Agentic AI takes that concept further. OpenAI, Google, Amazon, and Meta are all making general-purpose agents that can be used by anyone for any purpose. In theory, we might eventually see a vast network of specialized AI agents running in integration with each other. These could even serve customers’ needs within the familiar interfaces they already use. Crucially, personalization is the big selling point. It’s the reason AI super agents may succeed where super apps failed in the West. A super agent wouldn’t just aggregate services or fetch a gadget’s price when prompted. It would compare prices across frequented platforms, apply discounts, or suggest competing gadgets based on reviews you’ve left for previous models. ... This new ‘super agents’ reality would yield significant benefits for developers, too, possibly even redefining what it means to be a developer. While lots of startups invent good ideas daily, the reality of the software business is that you’re always limited by the number of developers available.


A Starter’s Framework for an Automation Center of Excellence

An automation CoE is focused on breaking down enterprise silos and promoting automation as a strategic investment imperative for achieving long-term value. It helps to ensure that when teams want to create new initiatives, they don’t duplicate previous efforts. There are various cost, efficiency and agility benefits to setting up such an entity in the enterprise. ... Focus on projects that deliver maximum impact with minimal effort. Use a clear, repeatable process to assess ROI — think about time saved, revenue gained and risks reduced versus the effort and complexity required. A simple question to ask is, “Is this process ready for automation, and do we have the right tools to make it work?” ... Your CoE needs a solid foundation. Select tools and systems that integrate seamlessly with your organization’s architecture. It might seem challenging at first, but the long-term cultural and technical benefits are worth it. Ensure your technology supports scalability as automation efforts grow. ... Standardize automation without stifling team autonomy. Striking this balance is key. Consider appointing both a business leader and a technical evangelist to champion the initiative and drive adoption across the organization. Clear ownership and guidelines will keep teams aligned while fostering innovation.


What is data architecture? A framework to manage data

The goal of data architecture is to translate business needs into data and system requirements, and to manage data and its flow through the enterprise. Many organizations today are looking to modernize their data architecture as a foundation to fully leverage AI and enable digital transformation. Consulting firm McKinsey Digital notes that many organizations fall short of their digital and AI transformation goals due to process complexity rather than technical complexity. ... While both data architecture and data modeling seek to bridge the gap between business goals and technology, data architecture is about the macro view that seeks to understand and support the relationships between an organization’s functions, technology, and data types. Data modeling takes a more focused view of specific systems or business cases. ... Modern data architectures must be scalable to handle growing data volumes without compromising performance. A scalable data architecture should be able to scale up and to scale out. ... Modern data architectures must ensure data remains accurate, consistent, and unaltered through its lifecycle to preserve its reliability for analysis and decision-making. They must prevent issues like data corruption, duplication, or loss.


Cybersecurity At the Crossroads: The Role Of Private Companies In Safeguarding U.S. Critical Infrastructure

Regulation alone is not a solution, but it does establish baseline security standards and provide much-needed funding to support defenses. Standards have come a long way and are relatively mature. Though there is still a tremendous amount of gray area, and a lack of relevance or attainability for certain industries and smaller organizations. The federal government must prioritize injecting funds into cybersecurity initiatives, ensuring that even the smallest entities managing critical infrastructure can implement strong security measures. With this funding, we must build a strong defense posture and cyber resiliency within these private sector organizations. This involves more than deploying advanced tools; it requires developing skilled personnel capable of responding to incidents and defending against attacks. Upskilling programs should focus on blue teaming and incident response, ensuring that organizations have the expertise to manage their security proactively. A critical component of effective cybersecurity is understanding and applying the standard risk formula: Risk = Threat x Vulnerability x Consequence. This formula emphasizes that risk is determined by evaluating the likelihood of an attack (Threat), the weaknesses in defenses (Vulnerability), and the potential impact of a breach (Consequence).
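A worked example of that formula, with each factor scored on a 0-to-1 scale; the scenario numbers are illustrative only:

```python
# Risk = Threat x Vulnerability x Consequence, each scored in [0, 1].
def risk(threat: float, vulnerability: float, consequence: float) -> float:
    return threat * vulnerability * consequence

# Ransomware against an unpatched, internet-facing control system:
print(risk(threat=0.9, vulnerability=0.7, consequence=0.95))  # ~0.60

# Same threat and consequence after patching and segmentation: because
# the factors multiply, cutting any one of them cuts the overall risk.
print(risk(threat=0.9, vulnerability=0.2, consequence=0.95))  # ~0.17
```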


Achieving Network TCO

The TCO discussion should shift from a unilateral cost justification (and payback) of technology that is being proposed to a discussion of what the opportunity costs for the business will be if a network infrastructure investment is canceled or delayed. If a company determines strategically to decentralize manufacturing and distribution but is also wary of adding headcount, it's going to seek out edge computing and network automation. It’s also likely to want robust security at its remote sites, which means investments in zero-trust networks and observability software that can assure that the same level of enterprise security is being applied at remote sites as it is at central headquarters. In cases like this, it shouldn’t be the network manager or even the CIO who is solely responsible for making the budget case for network investments. Instead, the network technology investments should be packaged together in the total remote business recommendation and investment that other C-level executives argue for with the CIO and/or network manager, HR, and others. In this scenario, the TCO of a network technology investment is weighed against the cost of not doing it at all and missing a corporate opportunity to decentralize operations, which can’t be accomplished without the technology that is needed to run it.


The coming hardware revolution: How to address AI’s insatiable demands

The US forecast for energy consumption on AI is alarming. Today’s AI queries require roughly 10x the electricity of traditional Google queries - a ChatGPT request consumes around ten times the watt-hours of a Google search. A typical CPU in a data center draws approximately 300 watts (Electric Power Research Institute), while an Nvidia H100 GPU draws up to 700 watts - run continuously for a month, on the order of an average US household’s monthly electricity use. Advancements in AI model capabilities, and greater use of parameters, continue to drive energy consumption higher. Much of this demand is centralized in data centers as companies like Amazon, Microsoft, Google, and Meta build more and more massive hyperscale facilities all over the country. US data center electricity consumption is projected to grow 125 percent by 2030, using nine percent of all national electricity. ... While big tech companies certainly have the benefit of incumbency and funding advantage, the startup ecosystem will play an absolutely crucial role in driving the innovation necessary to enable the future of AI. Large public tech companies often have difficulty innovating at the same speed as smaller, more nimble startups.
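A back-of-the-envelope check on those figures (the ~886 kWh/month household average is a commonly cited US EIA number; all values are approximate):

```python
# One H100 drawing 700 W around the clock for a 30-day month:
h100_watts = 700
hours_per_month = 24 * 30                       # 720 hours

gpu_kwh_per_month = h100_watts * hours_per_month / 1000
print(gpu_kwh_per_month)                        # 504.0 kWh

# Compare with an average US household (~886 kWh/month):
avg_household_kwh_per_month = 886
print(gpu_kwh_per_month / avg_household_kwh_per_month)  # ~0.57
```

So a single GPU running flat out lands on the same order of magnitude as a household's monthly consumption, which is the point the comparison is making.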


Agents are the 'third wave' of the AI revolution

"Agentic AI will be the next wave of unlocked value at scale," Sesh Iyer, managing director and senior partner with BCG X, Boston Consulting Group's tech build and design unit, told ZDNET. ... As with both analytical and gen AI, AI agents need to be built with and run along clear ethical and operational guidelines. This includes testing to minimize errors and a governance structure. As is the case with all AI instances, due diligence to ensure compliance and fairness is also a necessity for agents, Iyer said. As is also the case with broader AI, the right skills are needed to design, build and manage AI agents, he continued. Such talent is likely already available within many organizations, with the domain knowledge needed, he added. "Upskill your workforce to manage and use agentic AI effectively. Developing internal expertise will be key to capturing long-term value from these systems." ... To prepare for the shift from gen AI to agentic AI, "start small and scale strategically," he advises. "Identify a few high-impact use cases -- such as customer service -- and run pilot programs to test and refine agent capabilities. Alongside these use cases, understand the emerging platforms and software components that offer support for agentic AI."


Having it both ways – bringing the cloud to on-premises data storage

“StaaS is an increasingly popular choice for organisations, with demand only likely to grow. The simple reason for this is two-fold: it provides both convenience and simplicity,” said Anthony Cusimano, Director of Technical Marketing at Object First, a supplier of immutable backup storage appliances. There is more than one flavour of on-premises StaaS, as was pointed out by A3 Communications panel member Camberley Bates, Chief Technology Advisor at IT research and advisory firm The Futurum Group. Bates pointed out that the two general categories of on-premises StaaS service are Managed and Non-Managed StaaS. Managed StaaS sees vendors handling the whole storage stack, by both implementing and then fully managing storage systems on customers’ premises. However, Bates said enterprises are more attracted to Non-Managed StaaS. ... “Non-managed StaaS has become surprisingly of interest in the market. This is because enterprises buy it ‘once’ and do not have to go back for a capex request over and over again. Rather, it becomes a monthly bill that they can true-up over time. We have found the fully managed offering of less interest, with enterprises opting to use their own resources to handle the storage management,” continued Bates.



Quote for the day:

“If you don’t try at anything, you can’t fail… it takes backbone to lead the life you want” -- Richard Yates

Daily Tech Digest - December 19, 2024

How AI-Empowered ‘Citizen Developers’ Help Drive Digital Transformation

To compete in the future, companies know they need more IT capabilities, and the current supply chain has failed to provide the necessary resources. The only way for companies to fill the void is through greater emphasis on the skill development of their existing staff — their citizens. Imagine two different organizations. Both have explicit initiatives underway to digitally transform their businesses. In one, the IT organization tries to carry the load by itself. There, the mandate to digitize has only created more demand for new applications, automations, and data analyses — but no new supply. Department leaders and digitally oriented professionals initially submitted request after request, but as the backlog grew, they became discouraged and stopped bothering to ask when their solutions would be forthcoming. After a couple of years, no one even mentioned digital transformation anymore. In the other organization, digital transformation was a broad organizational mandate. IT was certainly a part of it and had to update a variety of enterprise transaction systems as well as moving most systems to the cloud. They had their hands full with this aspect of the transformation. Fortunately, in this hypothetical company, many citizens were engaged in the transformation process as well. 


Things CIOs and CTOs Need To Do Differently in 2025

“Because the nature of the threat that organizations face is increasing all the time, the tooling that’s capable of mitigating those threats becomes more and more expensive,” says Logan. “Add to that the constantly changing privacy and security rules around the globe and it becomes a real challenge to navigate effectively.” Also realize that everyone in the organization is on the same team, so problems should be solved as a team. IT leadership is in a unique position to help break down the silos between different stakeholder groups. ... CIOs and CTOs face several risks as they attempt to manage technology, privacy, ROI, security, talent and technology integration. According to Joe Batista, chief creatologist, former Dell Technologies & Hewlett Packard Enterprise executive, senior IT leaders and their teams should focus on improving the conditions and skills needed to address such challenges in 2025 so they can continue to innovate. “Keep collaborating across the enterprise with other business leaders and peers. Take it a step further by exploring how ecosystems can impact your business agenda,” says Batista. “Foster an environment that encourages taking on greater risks. The key is creating a space where innovation can thrive, and failures are steppingstones to success.”


5 reasons why 2025 will be the year of OpenTelemetry

OTel was initially targeted at cloud-native applications, but with the creation of a special interest group within OpenTelemetry focused on the continuous integration and continuous delivery (CI/CD) application development pipeline, OTel becomes a more powerful, end-to-end tool. “CI/CD observability is essential for ensuring that software is released to production efficiently and reliably,” according to project lead Dotan Horovits. “By integrating observability into CI/CD workflows, teams can monitor the health and performance of their pipelines in real-time, gaining insights into bottlenecks and areas that require improvement.” He adds that open standards are critical because they “create a common uniform language which is tool- and vendor-agnostic, enabling cohesive observability across different tools and allowing teams to maintain a clear and comprehensive view of their CI/CD pipeline performance.” ... The explosion of interest in AI, genAI, and large language models (LLMs) is creating an explosion in the volume of data that is generated, processed and transmitted across enterprise networks. That means a commensurate increase in the volume of telemetry data that needs to be collected in order to make sure AI systems are operating efficiently.
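As a sketch of what CI/CD observability can look like in practice, the snippet below emits a span for a pipeline stage using the OpenTelemetry Python SDK (pip install opentelemetry-sdk); the stage name and attributes are invented for illustration, and a real pipeline would export to a collector rather than the console:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire up a tracer that prints finished spans to stdout.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("ci.pipeline")

# Wrap a pipeline stage in a span; duration and status are recorded
# automatically, giving per-stage visibility into bottlenecks.
with tracer.start_as_current_span("build-and-test") as span:
    span.set_attribute("ci.stage", "test")     # illustrative attribute
    span.set_attribute("ci.commit", "abc123")  # hypothetical commit id
    # ... invoke the actual build/test step here ...
```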


The Importance of Empowering CFOs Against Cyber Threats

Today's CFOs must be collaborative leaders, willing to embrace an expanding role that includes protecting critical assets and securing the bottom line. To do this, CFOs must work closely with chief information security officers (CISOs), due to the sophistication and financial impact of cyberattacks. ... CFOs are uniquely positioned to understand the potential financial devastation from cyber incidents. The costs associated with a breach extend beyond immediate financial losses, encompassing longer-term repercussions, such as reputational damage, legal liabilities, and regulatory fines. CFOs must measure and consider these potential financial impacts when participating in incident response planning. ... The regulatory landscape for CFOs has evolved significantly beyond Sarbanes-Oxley. The Securities and Exchange Commission's (SEC's) rules on cybersecurity risk management, strategy, governance, and incident disclosure have become a primary concern for CFOs and reflect the growing recognition of cybersecurity as a critical financial and operational risk. ... Adding to the complexity, the CFO is now a cross-functional collaborator who must work closely with IT, legal, and other departments to prioritize cyber initiatives and investments. 


Community Banks Face Perfect Storm of Cybersecurity, Regulatory and Funding Pressures

Cybersecurity risks continue to cast a long shadow over technological advancement. About 42% of bankers expect cybersecurity risks to pose their most difficult challenge in implementing new technologies over the next five years. This concern is driving many institutions to take a cautious approach to emerging technologies like artificial intelligence. ... Banks express varying levels of satisfaction with their technology services. Asset liability management and interest rate risk technologies receive the highest satisfaction ratings, with 87% and 84% of respondents respectively reporting being “extremely” or “somewhat” satisfied. However, workflow processing and core service provider services show room for improvement, with less than 70% of banks expressing satisfaction with these areas. ... Compliance costs continue to consume a significant portion of bank resources. Legal and accounting/auditing expenses related to compliance saw notable increases, with both categories rising nearly 4 percentage points as a share of total expenses. The implementation of the current expected credit loss (CECL) accounting standard has contributed to these rising costs.


Dark Data Explained

Dark data often lies dormant and untapped, its value obscured by poor quality and disorganization. Yet within these neglected reservoirs of information lies the potential for significant insights and improved decision-making. To unlock this potential, data cleaning and optimization become vital. Cleaning dark data involves identifying and correcting inaccuracies, filling in missing entries, and eliminating redundancies. This initial step is crucial, as unclean data can lead to erroneous conclusions and misguided strategies. Optimization furthers the process by enhancing the usability and accessibility of the data. Techniques such as data transformation, normalization, and integration play pivotal roles in refining dark data. By transforming the data into standardized formats and ensuring it adheres to consistent structures, companies and researchers can more effectively analyze and interpret the information. Additionally, integration across different data sets and sources can uncover previously hidden patterns and relationships, offering a comprehensive view of the phenomenon being studied. By converting dark data through meticulous cleaning and sophisticated optimization, organizations can derive actionable insights and add substantial value. 
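A minimal pandas sketch of the cleaning and optimization steps described above; the file and column names are hypothetical:

```python
import pandas as pd

df = pd.read_csv("legacy_export.csv")   # hypothetical dark-data dump

# Cleaning: eliminate redundancies, fix types, fill missing entries.
df = df.drop_duplicates()
df["region"] = df["region"].str.strip().str.upper()        # standardize format
df["revenue"] = pd.to_numeric(df["revenue"], errors="coerce")
df["revenue"] = df["revenue"].fillna(df["revenue"].median())

# Optimization: normalize to [0, 1] so values are comparable when
# integrating this set with data from other sources.
rev = df["revenue"]
df["revenue_norm"] = (rev - rev.min()) / (rev.max() - rev.min())
```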


In potential reversal, European authorities say AI can indeed use personal data — without consent — for training

The European Data Protection Board (EDPB) issued a wide-ranging report on Wednesday exploring the many complexities and intricacies of modern AI model development. It said that it was open to potentially allowing personal data, without the owner’s consent, to train models, as long as the finished application does not reveal any of that private information. This reflects the reality that training data does not necessarily translate into the information eventually delivered to end users. ... “Nowhere does the EDPB seem to look at whether something is actually personal data for the AI model provider. It always presumes that it is, and only looks at whether anonymization has taken place and is sufficient,” Craddock wrote. “If insufficient, the SA would be in a position to consider that the controller has failed to meet its accountability obligations under Article 5(2) GDPR.” And in a comment on LinkedIn that mostly supported the standards group’s efforts, Patrick Rankine, the CIO of UK AI vendor Aiphoria, said that IT leaders should stop complaining and up their AI game. “For AI developers, this means that claims of anonymity should be substantiated with evidence, including the implementation of technical and organizational measures to prevent re-identification,” he wrote, noting that he agrees 100% with this sentiment.


Software Architecture and the Art of Experimentation

While we can’t avoid being wrong some of the time, we can reduce the cost of being wrong by running small experiments to test our assumptions and reverse wrong decisions before their costs compound. But here time is the enemy: there is never enough time to test every assumption, and so knowing which ones to confront is the art in architecting. Successful architecting means experimenting to test decisions that affect the architecture of the system, i.e. those decisions that are "fatal" to the success of the thing you are building if you are wrong. ... If you don’t run an experiment, you are assuming you already know the answer to some question. So long as that’s the case, or so long as the risk and cost of being wrong is small, you may not need to experiment. Some big questions, however, can only be answered by experimenting. Since you probably can’t run experiments for all the questions you have to answer, implicitly accepting the associated risk, you need to make a trade-off between the number of experiments you can run and the risks you won’t be able to mitigate by experimenting. The challenge in creating experiments that test both the MVP and MVA is asking questions that challenge the business and technical assumptions of both stakeholders and developers.


5 job negotiation tips for CAIOs

As you discuss base, bonus, and equity, be specific and find out exactly what their pay range actually is for this emerging role and how that compares with market rates for your location. For example, some recruiters may give you a higher number early on in discussions, and then once you’re well bought-in to the company after several interviews, the final offer may throttle things back. ... Set clear expectations early, and be prepared to withdraw your candidacy if any downward-revised amount later on is too far below your household needs. ... As a CAIO, you don’t want to be measured the same as the lines of business, or penalized if they fall short of quarterly or yearly sales targets. Ensure your performance metrics are appropriate for the role and the balance you’ll need to strike between near-term and longer-term objectives. For certain, AI should enable near-term productivity improvements and cost savings, but it should also enable longer-term revenue growth via new products and services, or enhancements to existing offerings. ... Companies sometimes place a clause in their legal agreement that states they own all pre-existing IP. Get that clause removed and itemize your pre-existing IP if needed to ensure it stays under your ownership. 


Leadership skills for managing cybersecurity during digital transformation

First, security must be top of mind as all new technologies are planned. As you innovate, ensure that security is built into deployments, and options chosen that match your business risk profile and organization’s values. For example, consider enabling the max security features that come with many IoT devices, such as forcing the change of default passwords, patching devices and ensuring vulnerabilities can be addressed. Likewise, ensure that AI applications are ethically sound, transparent, and do not introduce unintended biases. Second, a comprehensive risk assessment should be performed on the current network and systems environment as well as on the future planned “To Be” architecture. ... Digital transformation also demands leaders who are not only technically adept but also visionary in guiding their organizations through change. Leaders must be able to inspire a digital culture, align teams with new technologies, and drive strategic initiatives that leverage digital capabilities for competitive advantage. Finally, leaders must be life-long learners who constantly update their skills and forge strong relationships across their organization for this new digitally-transformed environment.



Quote for the day:

"Don’t watch the clock; do what it does. Keep going." -- Sam Levenson

Daily Tech Digest - December 18, 2024

The AI-Powered IoT Revolution: Are You Ready?

AI not only reduces the cost and latency of these operations but also provides actionable intelligence, enabling smarter decisions that enhance business efficiency by preventing downtimes, minimizing losses, improving sales, and unlocking a range of benefits tailored to specific use cases. Building on this synergy, AI on Edge—where AI processes run directly on edge devices such as IoT sensors, cameras, and smartphones rather than relying solely on cloud computing—will see significant adoption by 2025. By processing data locally, edge AI enables real-time decision-making, eliminating delays caused by data transmission to and from the cloud. This capability will transform applications like autonomous vehicles, industrial automation, and healthcare devices, where fast, reliable responses are mission-critical. Moreover, AI on Edge enhances privacy and security by keeping sensitive data on the device, reduces cloud costs and bandwidth usage, and supports offline functionality in remote or connectivity-limited environments. These advantages make it an attractive option for organizations seeking to push the boundaries of innovation while delivering superior user experiences and operational efficiency. 


Key strategies to enhance cyber resilience

To bolster resilience, consider developing stakeholder-specific playbooks, Wyatt says. Different teams play different roles in incident response, from detecting risk and deploying key controls to maintaining compliance, recovery and business continuity. Expect that each stakeholder group will have its own requirements and set of KPIs to meet, she says. “For example, the security team may have different concerns than the IT operations team. As a result, organizations should draft cyber resilience playbooks for each set of stakeholders that provide very specific guidance and ROI benefits for each group.” ... Cyber resilience is as much about the ability to recover from a major security incident as it is about proactively preparing, preventing, detecting and remediating it. That means having a formal disaster recovery plan, doing regular offsite back-ups of all critical systems and testing both the plan and the recovery process on a frequent basis. ... Boards have become very focused on managing risk and have become increasingly fluent in cyber risk. But many boards are surprised that when a crisis occurs, broader operational resilience is not a point of these discussions, according to Wyatt. Bring your board along by having external experts walk through previous events and break down the various areas of impact.


Smarter devops: How to avoid deployment horrors

Finding security issues post-deployment is a major risk, and many devops teams shift-left security practices by instituting devops security non-negotiables. These are a mix of policies, controls, automations, and tools, but most importantly, ensuring security is a top-of-mind responsibility for developers. ... “Integrating security and quality controls as early as possible in the software development lifecycle is absolutely necessary for a functioning modern devops practice,” says Christopher Hendrich, associate CTO of SADA. “Creating a developer platform with automation, AI-powered services, and clear feedback on why something is deemed insecure and how to fix it helps the developer to focus on developing while simultaneously strengthening the security mindset.” ... “Software development is a complex process that gets increasingly challenging as the software’s functionality changes or ages over time,” says Melissa McKay, head of developer relations at JFrog. “Implementing a multilayered, end-to-end approach has become essential to ensure security and quality are prioritized from initial package curation and coding to runtime monitoring.”


How Do We Build Ransomware Resilience Beyond Just Backups?

While email filtering tools are essential, it’s unrealistic to expect them to block every malicious message. As such, another important step is educating your end users to identify phishing emails and other suspicious content that makes it through the filters. User education should be an ongoing effort, not a one-time initiative: regular training sessions reinforce best practices and keep security in focus. To complement training, consider using phishing attack simulators. Several vendors offer tools that generate harmless, realistic-looking phishing messages and send them to your users; Microsoft 365 even includes a phishing simulation tool. ... Limiting user permissions is vital because ransomware operates with the permissions of the user who triggers the attack. As such, users should only have access to the resources they need to perform their jobs—no more, no less. If a user doesn’t have access to a specific resource, the ransomware won’t be able to encrypt it. Moreover, consider isolating high-value data on storage systems that require additional authentication. Doing so reduces exposure if ransomware spreads.
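The least-privilege point can be checked empirically. This small Python sketch, with hypothetical share paths, lists the shared locations the current account can write to, which approximates what ransomware running under that account could encrypt.

    import os
    from pathlib import Path

    # Hypothetical mount points for shared storage; adjust to your environment.
    SHARES = [Path("/mnt/finance"), Path("/mnt/hr"), Path("/mnt/public")]

    def writable_shares(shares: list[Path]) -> list[Path]:
        """Shares the *current* account can write to. Because ransomware runs
        with the permissions of whoever triggered it, this set approximates
        the blast radius if this account is compromised."""
        return [s for s in shares if s.is_dir() and os.access(s, os.W_OK)]

    if __name__ == "__main__":
        exposed = writable_shares(SHARES)
        print(f"this account could encrypt {len(exposed)} of {len(SHARES)} shares:")
        for share in exposed:
            print(f"  {share}")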


Azure Data Factory Bugs Expose Cloud Infrastructure

The Airflow instance's use of default, unchangeable configurations, combined with the cluster admin role's attachment to the Airflow runner, "caused a security issue" that could be manipulated "to control the Airflow cluster and related infrastructure," the researchers explained. If an attacker were able to breach the cluster, they could also manipulate Geneva, allowing them "to potentially tamper with log data or access other sensitive Azure resources," Unit 42 AI and security research manager Ofir Balassiano and senior security researcher David Orlovsky wrote in the post. Overall, the flaws highlight the importance of managing service permissions and monitoring the operations of critical third-party services within a cloud environment to prevent unauthorized access to a cluster. ... Attackers have two ways to gain access to and tamper with DAG files. The first is through the storage account containing the DAG files: either by leveraging a principal account with write permissions, or by using a shared access signature (SAS) token, which grants temporary, limited access to a DAG file. In this scenario, once a DAG file is tampered with, "it lies dormant until the DAG files are imported by the victim," the researchers explained. The second is to gain access to a Git repository using leaked credentials or a misconfigured repository.
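Because a tampered DAG lies dormant until it is imported, one plausible mitigation, sketched below in Python with hypothetical paths, is to baseline trusted hashes of DAG files and verify them before the scheduler picks them up. This is an illustrative control, not a verbatim Unit 42 recommendation.

    import hashlib
    import json
    from pathlib import Path

    DAG_DIR = Path("dags")              # hypothetical Airflow DAG folder
    BASELINE = Path("dag_hashes.json")  # hypothetical trusted-hash manifest

    def hash_file(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def snapshot() -> None:
        """Record trusted hashes at a known-good point (e.g., after code review)."""
        hashes = {str(p): hash_file(p) for p in sorted(DAG_DIR.glob("*.py"))}
        BASELINE.write_text(json.dumps(hashes, indent=2))

    def verify() -> list[str]:
        """Return DAG files that changed or appeared since the trusted snapshot.
        Run this before the scheduler imports them, since a tampered DAG lies
        dormant until it is imported."""
        trusted = json.loads(BASELINE.read_text())
        drifted = [p for p, h in trusted.items()
                   if not Path(p).exists() or hash_file(Path(p)) != h]
        new = [str(p) for p in DAG_DIR.glob("*.py") if str(p) not in trusted]
        return drifted + new

    if __name__ == "__main__":
        if not BASELINE.exists():
            snapshot()
        for suspect in verify():
            print(f"ALERT: unreviewed change in {suspect}")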


Whatever happened to the three-year IT roadmap?

“IT roadmaps are now shorter, typically not exceeding two years, due to the rapid pace of technological change,” he says. “This allows for more flexibility and adaptability in IT planning.” Kellie Romack, chief digital information officer of ServiceNow, is also shortening her horizon to align with the two- or three-year timeframe that is the norm for her company. Doing so keeps her focused on supporting the company’s overall future strategy but with enough flexibility to adjust along the journey. “That timeframe is a sweet spot that allows us to set a ‘dream big’ strategy with room to be agile, so we can deliver and push the limits of what’s possible,” she says. “The pace of technological change today is faster than it’s ever been, and if IT leaders aren’t looking around the corner now, it’s possible they’ll fall behind and never catch up.” ... “A roadmap is still a useful tool to provide that north star, the objectives and the goals you’re trying to achieve, and some sense of how you’ll get to those goals,” McHugh says. Without that, McHugh says CIOs won’t consistently deliver what’s needed when it’s needed for their organizations, nor will they get IT to an optimal advanced state. “If you don’t have a goal or an outcome, you’re going to go somewhere, we can promise you that, but you’re not going to end up in a specific location,” she adds.


Innovations in Machine Identity Management for the Cloud

Non-human identities (NHIs) are critical components within the digital landscape. They enable machine-to-machine communications, providing an array of automated services that underpin today’s digital operations. However, their growing prevalence means they are also becoming prime targets for cyber threats. Are existing cybersecurity strategies equipped to address this issue? Acting as agile guardians, NHI management platforms offer promising solutions, securing both the identities and their secrets from potential threats and vulnerabilities. By placing equal emphasis on the management of both human and non-human identities, businesses can create a comprehensive cybersecurity strategy that matches the complexity and diversity of today’s digital threats. ... When unsecured, NHIs become hotbeds for cybercriminals, who manipulate these identities to gain unauthorized access to sensitive data and systems. For companies that regularly handle consumer data (as in healthcare or finance), unauthorized access to and sharing of sensitive data can lead to hefty penalties for non-compliant data management practices. An effective NHI management strategy acts as a pivotal control over cloud security.
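To make NHI management slightly more tangible, here is a minimal Python sketch of one of its building blocks: an inventory that ties each machine identity to a human owner, its allowed scopes, and a secret-rotation policy. The fields and the 90-day policy are illustrative assumptions, not any specific product's data model.

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    MAX_SECRET_AGE = timedelta(days=90)  # hypothetical rotation policy

    @dataclass
    class MachineIdentity:
        name: str
        owner_team: str            # every NHI should have a human owner
        secret_rotated_at: datetime
        scopes: list[str]          # what the identity is allowed to do

    def needs_rotation(nhi: MachineIdentity, now: datetime) -> bool:
        """A secret older than the policy allows is flagged for rotation."""
        return now - nhi.secret_rotated_at > MAX_SECRET_AGE

    if __name__ == "__main__":
        now = datetime.now(timezone.utc)
        inventory = [
            MachineIdentity("billing-etl", "data-eng",
                            now - timedelta(days=200), ["db:read", "queue:write"]),
            MachineIdentity("ci-deployer", "platform",
                            now - timedelta(days=10), ["cluster:deploy"]),
        ]
        for nhi in inventory:
            if needs_rotation(nhi, now):
                print(f"rotate secret for {nhi.name} (owner: {nhi.owner_team})")

Even this toy version captures the core discipline: every machine identity is known, owned, scoped, and has a secret with a finite lifetime.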


From Crisis to Control: Establishing a Resilient Incident Response Framework for Deployed AI Models

An effective incident response framework for frontier AI companies should be comprehensive and adaptive, allowing quick and decisive responses to emerging threats. Researchers at the Institute for AI Policy and Strategy (IAPS) have proposed a post-deployment response framework, along with a toolkit of specific incident responses. The proposed framework consists of four stages: prepare; monitor and analyze; execute; and recovery and follow-up. ... Developers have a variety of actions available to contain and mitigate the harms of incidents caused by advanced AI models. These tools offer response mechanisms that can be executed individually or in combination, allowing developers to tailor responses to an incident's scope and severity. ... Frontier AI companies have recently provided more transparency into their internal safety policies, including the Responsible Scaling Policies (RSPs) published by Anthropic, Google DeepMind, and OpenAI. When it comes to post-deployment incidents, however, all three RSPs lack clear, detailed, and actionable response plans.
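The four stages can be sketched as a simple control flow. The Python below illustrates only the staging idea; the severity levels and response actions are hypothetical examples, not the IAPS toolkit itself.

    from enum import Enum, auto

    class Stage(Enum):
        PREPARE = auto()
        MONITOR_AND_ANALYZE = auto()
        EXECUTE = auto()
        RECOVERY_AND_FOLLOW_UP = auto()

    # Hypothetical response actions, scaled to severity; real responses may be
    # executed individually or in combination, as described above.
    RESPONSES_BY_SEVERITY = {
        "low":      ["add usage monitoring", "notify on-call"],
        "medium":   ["rate-limit the model endpoint", "roll back to prior model"],
        "critical": ["suspend model access", "notify regulators and affected users"],
    }

    def handle_incident(severity: str) -> None:
        stage = Stage.MONITOR_AND_ANALYZE  # PREPARE happens before any incident
        print(f"[{stage.name}] triaging incident, assessed severity: {severity}")

        stage = Stage.EXECUTE
        for action in RESPONSES_BY_SEVERITY[severity]:
            print(f"[{stage.name}] {action}")

        stage = Stage.RECOVERY_AND_FOLLOW_UP
        print(f"[{stage.name}] post-incident review; update the PREPARE-stage runbook")

    if __name__ == "__main__":
        handle_incident("medium")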


We’re Extremely Focused on Delivering Value Sustainably — NIH CDO

Speaking about the challenges of her role as CDO, Ramirez highlights managing a rapidly growing data portfolio. She stresses the importance of fostering partnerships and ensuring the platform’s accessibility to those aiming to leverage its capabilities. One of the central hurdles has been effectively communicating the portfolio’s offerings and predicting data availability for research purposes. She describes the critical need to align funding and partnerships to support delivery timelines of 12 to 24 months, a task that has demanded strong leadership from the coordinating center. This dual role of ensuring readiness and delivery has been both a challenge and a success. Ramirez shares that the team has grown more adept at framing research data as a product of their system, ready to meet the needs of collaborators. She also expresses enthusiasm for working with partners to demonstrate the platform’s benefits and efficiencies in advancing research objectives. On AI literacy and upskilling initiatives in the organization, Ramirez mentions building a strong sense of community among data professionals. She highlights efforts to establish a community of practice that brings together individuals working in their federal coordinating center and awardees who specialize in data science and systems.


5 Questions Your Data Protection Vendor Hopes You Don’t Ask

Data protection vendors often rely on high-level analysis to detect unusual activity in backups or snapshots. This includes threshold analysis, identifying unusual file changes, or detecting changes in compression rates that may suggest ransomware encryption. These methods are essentially guesses, prone to false positives, and during a ransomware attack, details matter. ... Organizations snapshot or back up data regularly, at intervals ranging from hourly to daily. When an attack occurs, restoring a snapshot or backup overwrites production data—some of which may have been corrupted by ransomware—with clean data. If only 20% of the production data has been manipulated by bad actors, recovering the full backup or snapshot overwrites the 80% that did not need restoration. ... Cybercriminals understand that databases are the backbone of many businesses, making them prime targets for extortion. By corrupting these databases, they can pressure organizations into paying ransoms. ... AI is now a mainstream topic, but understanding how an AI engine is trained is critical to evaluating its effectiveness. When dealing with ransomware, it's important that the AI is trained on real ransomware variants and how they impact data.
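The compression-rate heuristic mentioned above is easy to demonstrate, along with its false-positive problem, in a few lines of Python. The cutoff here is a hypothetical assumption; note that already-compressed formats such as JPEGs or ZIP archives look just as incompressible as encrypted files, which is exactly why such guesses misfire.

    import zlib
    from pathlib import Path

    COMPRESSIBILITY_CUTOFF = 0.98  # hypothetical: near-incompressible ~ possibly encrypted

    def compression_ratio(data: bytes) -> float:
        """Compressed size over original size. Encrypted bytes look random
        and barely compress, so a ratio near 1.0 is suspicious."""
        if not data:
            return 0.0
        return len(zlib.compress(data, 6)) / len(data)

    def flag_suspect_files(root: Path) -> list[Path]:
        suspects = []
        for path in root.rglob("*"):
            if path.is_file():
                ratio = compression_ratio(path.read_bytes()[:1 << 20])  # sample first 1 MiB
                if ratio > COMPRESSIBILITY_CUTOFF:
                    suspects.append(path)  # JPEGs, ZIPs, etc. trigger this too
        return suspects

    if __name__ == "__main__":
        for f in flag_suspect_files(Path(".")):
            print(f"possible encryption: {f}")

A detector like this can say a file changed and became incompressible, but not which records inside it were corrupted, which is the level of detail a ransomware recovery actually needs.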


Quote for the day:

"The essence of leadership is the capacity to build and develop the self-esteem of the workers." -- Irwin Federman