Daily Tech Digest - December 23, 2024

‘Orgs need to be ready’: AI risks and rewards for cybersecurity in 2025

“In 2025, we expect to see more AI-driven cyberthreats designed to evade detection, including more advanced evasion techniques bypassing endpoint detection and response (EDR), known as EDR killers, and traditional defences,” Khalid argues. “Attackers may use legitimate applications like PowerShell and remote access tools to deploy ransomware, making detection harder for standard security solutions.” On a more frightening note, Michael Adjei, director of systems engineering at Illumio, believes that AI will offer something of a field day for social engineers, who will trick people into creating breaches themselves: “Ordinary users will, in effect, become unwitting participants in mass attacks in 2025.” ... “With greater adoption of AI will come increased cyberthreats, and security teams need to remain nimble, confident and knowledgeable.” Similarly, Britton argues that teams “will need to undergo a dedicated effort around understanding how [AI] can deliver results”. “To do this, businesses should start by identifying which parts of their workflows are highly manual, which can help them determine how AI can be overlaid to improve efficiency. Key to this will be determining what success looks like. Is it better efficiency? Reduced cost?”


Will we ever trust robots?

The chief argument for robots with human characteristics is a functional one: Our homes and workplaces were built by and for humans, so a robot with a humanlike form will navigate them more easily. But Hoffman believes there’s another reason: “Through this kind of humanoid design, we are selling a story about this robot that it is in some way equivalent to us or to the things that we can do.” In other words, build a robot that looks like a human, and people will assume it’s as capable as one. In designing Alfie’s physical appearance, Prosper has borrowed some aspects of typical humanoid design but rejected others. Alfie has wheels instead of legs, for example, as bipedal robots are currently less stable in home environments, but he does have arms and a head. The robot will be built on a vertical column that resembles a torso; his specific height and weight are not yet public. He will have two emergency stop buttons. Nothing about Alfie’s design will attempt to obscure the fact that he is a robot, Lewis says. “The antithesis [of trustworthiness] would be designing a robot that’s intended to emulate a human … and its measure of success is based on how well it has deceived you,” he told me. “Like, ‘Wow, I was talking to that thing for five minutes and I didn’t realize it’s a robot.’ That, to me, is dishonest.”


My Personal Reflection on DevOps in 2024 and Looking Ahead to 2025

As we move into 2025, the big stories that dominated 2024 will continue to evolve. We can expect AI—particularly generative AI—to become even more deeply ingrained in the DevOps toolchain. Prompt engineering for AI models will likely emerge as a specialized skill, just as writing Dockerfiles was a skill set that distinguished DevOps engineers a decade ago. Agentic AI will become the norm, with teams of agents taking on the tasks that lower-level workers once performed. On the policy side, escalating regulatory demands will push enterprises to adopt more stringent compliance frameworks, integrating AI-driven compliance-as-code tools into their pipelines. Platform engineering will mature, focusing on standardization and the creation of “golden paths” that offer best practices out of the box. We may also see a consolidation of DevOps tool vendors as the market seeks integrated, end-to-end platforms over patchwork solutions. The focus will be on usability, quality, security and efficiency—attributes that can only be realized through cohesive ecosystems rather than fragmented toolchains. Sustainability will also factor into 2025’s narrative. As environmental concerns shape global economic policies and public sentiment, DevOps teams will take resource optimization more seriously.


From Invisible UX to AI Governance: Kanchan Ray, CTO, Nagarro Shares his Vision for a Connected Future

Vision and data derived from videos have become integral to numerous industries, with machine vision playing a crucial role in automating business processes. For instance, automatic inventory management, often supported by robots, is transitioning from experimental to mainstream. Machine vision also enhances security and safety by replacing human monitoring with machines that operate around the clock, offering greater accuracy at a lower cost. On the consumer front, virtual try-ons and AI-assisted mirrors have become standard features in reputable retail outlets, both in physical stores and online platforms. ... Traditional boundaries of security, which once focused on standard data security, governance, and IT protocols, are now fluid and dynamic. The integration of AI, data analytics, and machine learning has created diverse contexts for output consumption, resulting in new business operations around model simulations and decision-making related to model pipelines. These operations include processes like model publishing, hyperparameter observability, and auditing model reasoning, all of which push the boundaries of AI responsibility.


If your AI-generated code becomes faulty, who faces the most liability exposure?

None of the lawyers, though, discussed who is at fault if the code generated by an AI results in some catastrophic outcome. For example: The company delivering a product shares some responsibility for, say, choosing a library that has known deficiencies. If a product ships using a library that has known exploits and that product causes an incident that results in tangible harm, who owns that failure? The product maker, the library coder, or the company that chose the product? Usually, it's all three. ... Now add AI code into the mix. Clearly, most of the responsibility falls on the shoulders of the coder who chooses to use code generated by an AI. After all, it's common knowledge that the code may not work and needs to be thoroughly tested. In a comprehensive lawsuit, will claimants also go after the companies that produce the AIs and even the organizations from which content was taken to train those AIs (even if done without permission)? As every attorney has told me, there is very little case law thus far. We won't really know the answers until something goes wrong, parties wind up in court, and it's adjudicated thoroughly. We're in uncharted waters here. 


5 Signs You’ve Built a Secretly Bad Architecture (And How to Fix It)

Dependencies are the hidden traps of software architecture. When your system is littered with them — whether they’re external libraries, tightly coupled modules, or interdependent microservices — it creates a tangled web that’s hard to navigate. They make the system difficult to debug locally. Every change risks breaking something else. Deployments take more time, troubleshooting takes longer, and cascading failures are a real threat. The result? Your team spends more time toiling and less time innovating. ... Reducing dependencies doesn’t mean eliminating them entirely or splitting your system into nanoservices. Overcorrecting by creating tiny, hyper-granular services might seem like a solution, but it often leads to even greater complexity. In this scenario, you’ll find yourself managing dozens — or even hundreds — of moving parts, each requiring its own maintenance, monitoring, and communication overhead. Instead, aim for balance. Establish boundaries for your microservices that promote cohesion, avoiding unnecessary fragmentation. Strive for an architecture where services interact efficiently but aren’t overly reliant on each other, which increases the flexibility and resilience of your system.
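The balance the excerpt describes—services that interact but aren’t overly reliant on each other—can be illustrated with a small sketch. The example below is mine, not from the article: an order service depends only on a narrow interface rather than on another service’s internals, so the backend can be swapped or mocked without touching the caller.

```python
from abc import ABC, abstractmethod

# Hypothetical example: the order service depends only on a narrow
# interface, not on the inventory service's internal implementation.
class InventoryPort(ABC):
    @abstractmethod
    def reserve(self, sku: str, qty: int) -> bool: ...

class InMemoryInventory(InventoryPort):
    def __init__(self, stock: dict):
        self.stock = stock

    def reserve(self, sku: str, qty: int) -> bool:
        if self.stock.get(sku, 0) >= qty:
            self.stock[sku] -= qty
            return True
        return False

class OrderService:
    # Coupled to the interface, not a concrete service: replacing the
    # inventory backend (or stubbing it in tests) needs no change here.
    def __init__(self, inventory: InventoryPort):
        self.inventory = inventory

    def place_order(self, sku: str, qty: int) -> str:
        return "confirmed" if self.inventory.reserve(sku, qty) else "rejected"

orders = OrderService(InMemoryInventory({"widget": 5}))
print(orders.place_order("widget", 2))  # confirmed
print(orders.place_order("widget", 9))  # rejected
```

The same boundary works across process boundaries: whether `InventoryPort` is backed by an in-process object or a remote microservice, the order service’s dependency surface stays the same size.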


The 4 key aspects of a successful data strategy

Without a data strategy to structure various efforts, the value added from data in any organization of a certain size or complexity falls far short of the possibilities. In such cases, data is only used locally or aggregated along relatively rigid paths. The result? The company’s agility in making necessary changes remains inhibited. In the absence of such a strategy, technical concepts and architectures can hardly increase this value either. A well-thought-out data strategy can be formulated in various ways. It encompasses several different facets, such as availability, searchability, security, protection of personal data, cost control, etc. However, four key aspects that form the basis for a data strategy can be identified from a variety of data-related projects: identity, bitemporality, networking and federalism. ... A data strategy also determines how companies encode the knowledge about their products, services, processes and business models. This makes solutions possible that also allow for automated decision support. To sell glasses online, a lot of specialized optician knowledge must be encoded so that the customer does not make serious mistakes when configuring their glasses. The optimal size of the progressive lenses depends, among other things, on visual acuity and lens geometry.
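Of the four aspects named, bitemporality is the most concretely technical: each fact carries both the time it was true in the real world and the time the system learned it. A minimal sketch (record and function names are mine, not from the article) shows how this supports “as-of” queries:

```python
from dataclasses import dataclass
from datetime import date

# Minimal bitemporal record: valid_from is when the fact became true in
# the real world; recorded_on is when the system learned about it.
@dataclass
class PriceRecord:
    sku: str
    price: float
    valid_from: date
    recorded_on: date

def price_as_of(records, sku, valid_date, known_date):
    """Price of `sku` effective on valid_date, using only facts the
    system had recorded by known_date."""
    candidates = [r for r in records
                  if r.sku == sku
                  and r.valid_from <= valid_date
                  and r.recorded_on <= known_date]
    if not candidates:
        return None
    # Latest effective fact wins; among ties, the most recently recorded.
    return max(candidates, key=lambda r: (r.valid_from, r.recorded_on)).price

history = [
    PriceRecord("lens", 100.0, date(2024, 1, 1), date(2024, 1, 1)),
    # A backdated correction, recorded in March for a February change:
    PriceRecord("lens", 120.0, date(2024, 2, 1), date(2024, 3, 1)),
]
# What the system believed in February vs. what we know now:
print(price_as_of(history, "lens", date(2024, 2, 15), date(2024, 2, 15)))  # 100.0
print(price_as_of(history, "lens", date(2024, 2, 15), date(2024, 3, 15)))  # 120.0
```

Because nothing is overwritten, the system can reproduce both what was true and what it believed at any point—exactly the property audits and backdated corrections require.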


Maximizing the impact of cybercrime intelligence on business resilience

An intelligence capability is only as effective as its coverage of the adversary. A robust program ensures historical coverage for context, near-real-time coverage for timely responses to immediate threats, and depth of coverage for sufficient understanding. Cybercrime intelligence coverage encompasses both human and technical data. Valuable sources of information include any platforms where cybercriminals gather to communicate, coordinate, or trade, such as social networks, chatrooms, forums and direct one-on-one interactions. Technical coverage requires visibility into the tools used by adversaries. This coverage can be obtained through programmatic malware emulation across the full spectrum of malware families deployed by cybercriminals, ensuring comprehensive insights into their activities in a timely and ongoing manner. ... Adversary Intelligence is produced from a focused collection, analysis and exploitation capability and curated from where threat actors collaborate, communicate and plan cyber attacks. Obtaining and utilizing this Intelligence provides proactive and groundbreaking insights into the methodology of top-tier cybercriminals – target selection, assets and tools used, associates and other enablers that support them.


Large language overkill: How SLMs can beat their bigger, resource-intensive cousins

LLMs are incredibly powerful, yet they are also known for sometimes “losing the plot,” or offering outputs that veer off course due to their generalist training and massive data sets. That tendency is made more problematic by the fact that OpenAI’s ChatGPT and other LLMs are essentially “black boxes” that don’t reveal how they arrive at an answer. This black box problem is going to become a bigger issue going forward, particularly for companies and business-critical applications where accuracy, consistency and compliance are paramount. ... Fortunately, SLMs are better suited to address many of the limitations of LLMs. Rather than being designed for general-purpose tasks, SLMs are developed with a narrower focus and trained on domain-specific data. This specificity allows them to handle nuanced language requirements in areas where precision is paramount. Rather than relying on vast, heterogeneous datasets, SLMs are trained on targeted information, giving them the contextual intelligence to deliver more consistent, predictable and relevant responses. This offers several advantages. First, they are more explainable, making it easier to understand the source and rationale behind their outputs. This is critical in regulated industries where decisions need to be traced back to a source.


Beware Of Shadow AI – Shadow IT’s Less Well-Known Brother

Even though AI brings great productivity, Shadow AI introduces different risks ... Studies show employees are frequently sharing legal documents, HR data, source code, financial statements and other sensitive information with public AI applications. AI tools can inadvertently expose this sensitive data to the public, leading to data breaches, reputational damage and privacy concerns. ... Feeding data into public platforms means that organizations have very little control over how their data is managed, stored or shared, with little knowledge of who has access to this data and how it will be used in the future. This can result in non-compliance with industry and privacy regulations, potentially leading to fines and legal complications. ... Third-party AI tools could have built-in vulnerabilities that a threat actor could exploit to gain access to the network. These tools can lack security standards compared to an organization’s internal security systems. Shadow AI can also introduce new attack vectors making it easier for malicious actors to exploit weaknesses. ... Without proper governance or oversight, AI models can spit out biased, incomplete or flawed outputs. Such biased and inaccurate results can bring harm to organizations. 



Quote for the day:

“Success is most often achieved by those who don't know that failure is inevitable.” -- Coco Chanel

Daily Tech Digest - December 22, 2024

3 Steps To Include AI In Your Future Strategic Plans

AI is complex and multifaceted, so adopting it is not as simple as replacing legacy systems with new technology. Leaders will need to dig deeper to uncover barriers and opportunities. This can involve inviting external experts to discuss AI's benefits and challenges, hosting workshops where team members can explore different case studies, or creating internal discussion groups focused on various aspects of AI technology and potential barriers to adoption. ... A strong strategic plan should clearly link prospective investments to the organization's purpose and mission. For example, if customer centricity is central to the mission, any investment in new technology should directly connect to improving customer outcomes. ... A strategic plan should not only outline planned AI initiatives but also provide a clear roadmap for implementation. Given that AI is still evolving, it's crucial not to create a roadmap in isolation from ever-changing business challenges, market dynamics, or technological advancements. ... In this context, an AI strategy roadmap should be emergent—meaning it should be grounded in key strategic intentions while also being flexible enough to adapt to unforeseen events or black swan occurrences that necessitate rethinking and adjustments.


Can Pure Scrum Actually Work?

“Pure Scrum,” described in the Scrum Guide, is an idiosyncratic framework that helps create customer value in a complex environment. However, five main issues challenge its general corporate application:

- Pure Scrum focuses on delivery: How can we avoid running in the wrong direction by building things that do not solve our customers’ problems?
- Pure Scrum ignores product discovery in particular and product management in general. If you think of the Double Diamond, to use a popular picture, Scrum is focused on the right side.
- Pure Scrum is designed around one team focused on supporting one product or service.
- Pure Scrum does not address portfolio management. It is not designed to align and manage multiple product initiatives or projects to achieve strategic business objectives.
- Pure Scrum is based on far-reaching team autonomy: The Product Owner decides what to build, the Developers decide how to build it, and the Scrum team self-manages.

... At its core, pure Scrum is less a project management framework and more a reflection of an organization’s fundamental approach to creating value. It requires a profound shift from seeing work as a series of prescribed steps to viewing it as a continuous journey of discovery and adaptation.


The Rise of Agentic AI: How Hyper-Automation is Reshaping Cybersecurity and the Workforce

As AI advances, concerns about job displacement grow louder. For years, organizations have reassured employees that AI will “enhance, not replace” human roles. Smith offered a more nuanced perspective: “AI will replace tasks, not people—at least in the near term. Human oversight remains critical because we still don’t fully understand AI behavior.” In cybersecurity, AI acts as a force multiplier, streamlining tedious tasks like data analysis and incident documentation while enabling humans to focus on strategic decisions. This collaboration allows professionals to do more with less, amplifying productivity without eliminating the need for human expertise. However, Smith acknowledged long-term challenges. ... The rise of agentic AI marks a transformative moment for cybersecurity and the workforce. As organizations move beyond static workflows and embrace dynamic, autonomous systems, they gain the ability to respond to threats faster and more efficiently than ever before. However, this evolution demands a strategic approach—one that balances automation with human oversight, strengthens defenses against AI-driven attacks, and prepares for the societal shifts AI will bring.


If ChatGPT produces AI-generated code for your app, who does it really belong to?

From a contractual point of view, Santalesa contends that most companies producing AI-generated code will, "as with all of their other IP, deem their provided materials -- including AI-generated code -- as their property." OpenAI (the company behind ChatGPT) does not claim ownership of generated content. According to their terms of service, "OpenAI hereby assigns to you all its right, title, and interest in and to Output." Clearly, though, if you're creating an application that uses code written by an AI, you'll need to carefully investigate who owns (or claims to own) what. For a view of code ownership outside the US, ZDNET turned to Robert Piasentin, a Vancouver-based partner in the Technology Group at McMillan LLP, a Canadian business law firm. He says that ownership, as it pertains to AI-generated works, is still an "unsettled area of the law." ... Piasentin says there may already be some UK case law precedent, based not on AI but on video game litigation. A case before the High Court determined that images produced in a video game were the property of the game developer, not the player -- even though the player manipulated the game to produce a unique arrangement of game assets on the screen.


Supply Chain Risk Mitigation Must Be a Priority in 2025

Implementing impactful supply chain protections is far easier said than done, due to the complexity, scale, and integration of modern supply chain ecosystems. While there isn't a silver bullet for eradicating threats entirely, prioritizing a targeted focus on effective supply chain risk management principles in 2025 is a critical place to start. It will require an optimal balance of rigorous supplier validation, purposeful data exposure, and meticulous preparation. ... As supply chain attacks accelerate, organizations must operate under the assumption that a breach isn't just possible — it's probable. An "assumption of breach" mindset shift will help drive more meticulous approaches to preparation via comprehensive supply chain incident response and risk mitigation. Preparation measures should begin with developing and regularly updating agile incident response processes that specifically cater to third-party and supply chain risks. For effectiveness, these processes will need to be well-documented and frequently practiced through realistic simulations and tabletop exercises. Such drills help identify potential gaps in the response strategy and ensure that all team members understand their roles and responsibilities during a crisis.


The End of Bureaucracy — How Leadership Must Evolve in the Age of Artificial Intelligence

AI doesn't just optimize — it transforms. It flattens hierarchies, demands transparency and dismantles traditional power structures. For those managers who thrive on gatekeeping, AI represents a fundamental threat, eliminating barriers they've spent careers building. Consider this: AI thrives on efficiency, speed and clarity. Tasks that once consumed hours of human effort — like vetting vendor contracts or managing customer service inquiries — are now handled instantly by AI systems. Employees can experiment with bold ideas without wading through endless committee approvals. But the true power of AI lies in decentralizing decision-making. By analyzing vast datasets, AI equips frontline employees with actionable insights that previously required executive oversight. This creates organizations that are faster, more agile and less dependent on gatekeepers. ... In an AI-first world, hierarchies will begin to collapse as real-time data eliminates the need for multiple layers of oversight, enabling faster and more efficient decision-making. At the same time, workflows will be reimagined as leaders take on the critical task of redesigning processes to seamlessly integrate AI, ensuring organizations can adapt quickly and effectively.


GAO report says DHS, other agencies need to up their game in AI risk assessment

The GAO said it is “recommending that DHS act quickly to update its guidance and template for AI risk assessments to address the remaining gaps identified in this report.” DHS, in turn, it said, “agreed with our recommendation and stated it plans to provide agencies with additional guidance that addresses gaps in the report including identifying potential risks and evaluating the level of risk.” ... AI, he said, “is being pushed out to businesses and consumers by organizations that profit from doing so, and assessing and addressing the potential harm it may cause has until recently been an afterthought. We are now seeing more focus on these potential negative effects, but efforts to contain them, let alone prevent them, will always be far behind the steamroller of new innovations in the AI realm.” Thomas Randall, research lead at Info-Tech Research Group, said, “it is interesting that the DHS had no assessments that evaluated the level of risk for AI use and implementation, but had largely identified mitigation strategies. What this may mean is the DHS is taking a precautionary approach in the time it was given to complete this assessment.” Some risks, he said, “may be identified as significant enough to warrant mitigation regardless of precise quantification of that risk. 


How CI/CD Helps Minimize Technical Debt in Software Projects

One of the foundational principles of CI/CD is the enforcement of automated testing. Automated tests, such as unit tests, integration tests, and end-to-end tests, ensure that code changes do not break existing functionality. By integrating testing into the CI pipeline, developers are alerted to issues immediately after they commit code. ... CI/CD pipelines facilitate incremental and iterative development by encouraging small, frequent code commits. Large, monolithic changes often introduce complexity and technical debt because they are harder to test, debug, and review effectively. ... Technical debt often arises from manual processes that are error-prone and time-consuming. CI/CD eliminates many of these inefficiencies by automating repetitive tasks, such as building, testing, and deploying applications. Automation ensures that these steps are performed consistently and accurately, reducing the risk of human error. ... Code reviews are a critical component of maintaining high-quality software. CI/CD tools enhance the code review process by providing automated feedback on every commit. This feedback loop fosters a culture of accountability and continuous improvement among developers.
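The automated-testing gate described above can be made concrete with a small sketch. The function and tests below are illustrative (names are mine); the point is that a CI pipeline runs a suite like this on every commit, and a failing assertion blocks the change before the regression reaches the main branch.

```python
import unittest

# Illustrative function under test (hypothetical business logic).
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    # In CI, this suite runs automatically after each commit; any
    # failure alerts the developer immediately, as described above.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 15), 170.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    # exit=False so a CI wrapper can collect results without the
    # interpreter terminating mid-pipeline.
    unittest.main(exit=False, argv=["ci-tests"])
```

A pipeline typically invokes the suite (for example, `python -m unittest`) as a required stage, so unreviewed, untested changes simply cannot merge—small commits plus this gate are what keep debt from accumulating silently.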


Cost-conscious repatriation strategies

First, this is not a pushback on cloud technology as a concept; cloud works and has worked for the past 15 years. This repatriation trend highlights concerns about the unexpectedly high costs of cloud services, especially when enterprises feel they were promised lowered IT expenses during the earlier “cloud-only” revolutions. Leaders must adopt a more strategic perspective on their cloud architecture. It’s no longer just about lifting and shifting workloads into the cloud; it’s about effectively tailoring applications to leverage cloud-native capabilities—a lesson GEICO learned too late. A holistic approach to data management and technology strategies that aligns with an organization’s unique needs is the path to success and lower bills. Organizations are now exploring hybrid environments that blend public cloud capabilities with private infrastructure. A dual approach, which is nothing new, allows for greater data control, reduced storage and processing costs, and improved service reliability. Weekly noted that there are ways to manage capital expenditures in an operational expense model through on-premises solutions. On-prem systems tend to be more predictable and cost-effective over time.


Cyber Resilience: Adapting to Threats in the Cloud Era

Use cloud-native security solutions that offer automated threat detection, incident response, and monitoring. These technologies ought to be flexible enough to adjust to changes in the cloud environment and defend against new risks as they arise. ... Effective cyber resilience plans enable businesses to recover quickly from emergencies by reducing downtime and maintaining continuous service delivery. Businesses that put flexibility first can manage emergencies with few problems, which helps them keep the confidence and trust of their clients. Cyber resilience strongly emphasizes flexibility, enabling companies to address new risks in the ever-evolving digital environment. Businesses can lower financial losses and safeguard their reputation by concentrating on data protection and breach remediation. Finding and fixing common configuration mistakes in cloud systems that could lead to security incidents and data breaches requires using Cloud Security Posture Management (CSPM) tools. ... Because criminals frequently exploit these configuration errors to cause data breaches, it is essential to identify them. Organizations can monitor their cloud environments and ensure that settings follow security best practices and regulations by using CSPM solutions.
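The kind of check a CSPM tool automates can be sketched in a few lines. This is an illustrative model only (the resource fields and rule names are mine, not any vendor's API): declared cloud resources are scanned against best-practice rules, and each violation becomes a finding.

```python
# Illustrative CSPM-style scan: flag common misconfigurations in a
# declared inventory of cloud resources (data model is hypothetical).
RULES = {
    "public_access": lambda res: res.get("public_access") is True,
    "unencrypted":   lambda res: not res.get("encrypted", False),
    "no_logging":    lambda res: not res.get("logging", False),
}

def scan(resources):
    """Return (resource_name, violated_rule) pairs for every rule hit."""
    findings = []
    for res in resources:
        for rule_name, violated in RULES.items():
            if violated(res):
                findings.append((res["name"], rule_name))
    return findings

inventory = [
    {"name": "backup-bucket", "public_access": True,  "encrypted": True,  "logging": True},
    {"name": "logs-bucket",   "public_access": False, "encrypted": False, "logging": True},
]
for name, rule in scan(inventory):
    print(f"{name}: violates {rule}")
```

Real CSPM products run checks like these continuously against live cloud APIs and map findings to compliance frameworks, but the core loop—inventory, rules, findings—is the same.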



Quote for the day:

"Listen with curiosity, speak with honesty act with integrity." -- Roy T Bennett

Daily Tech Digest - December 21, 2024

The New Paradigm – The Rise of the Virtual Architect

We’re on the brink of a new paradigm in Enterprise Architecture—one where architects will have unprecedented access to knowledge, insights, and tools through what I call the Virtual Architect. The Virtual Architect isn’t limited to financial services. I’ve seen interest across industries like insurance and telecoms, where clients are eager to deploy such solutions. Why? Because it promises to provide accurate, real-time information, support colleagues, and even generate designs. Yes, you read that right—design generation is on the table. Naturally, this raises a big question: does this mean architects will be replaced? We’ll get to that in a moment. ... But here’s the catch: how do we ensure the designs generated by a Virtual Architect are accurate? The old saying applies—it’s only as good as the quality of the data and designs you feed in. That is where ongoing training and validation from architects remain crucial. So, will the Virtual Architect replace human architects? I don’t believe so, not in the near future. Designing systems is just one aspect of an architect’s role. Stakeholder engagement, strategic thinking, and soft skills are equally important—and these are areas where AI still falls short. For now, the Virtual Architect is an enhancement, not a replacement. 


IT/OT convergence propels zero-trust security efforts

Companies want flexibility in how end users and business applications access and interact with OT systems. ... Enterprises also want to extract data from OT systems, which requires network connectivity. For example, manufacturers can pull real-time data from their assembly lines so that specialized analytics applications can identify opportunities for efficiency and predict disruptions to production. While converging OT onto IT networks can drive innovation, it exposes OT systems to the threats that proliferate in the digital world. Companies often need new security solutions to protect OT. EMA’s latest research report, “Zero Trust Networking: How Network Teams Support Cybersecurity,” revealed that IT/OT convergence drives 38% of enterprise zero-trust security strategies. ... IT/OT convergence leads enterprises to set different priorities for zero-trust solution requirements. When modernizing secure remote access solutions for zero trust, OT-focused companies have a stronger need for granular policy management capabilities. These companies are more likely to have a secure remote access solution that can cut off network access in response to anomalous behavior or changes in the state of a device. When implementing zero-trust network segmentation, OT-focused companies are more likely to seek a solution with dynamic and adaptive segmentation controls.
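The adaptive access control described above—cutting off access when behavior or device state changes—amounts to re-evaluating a policy on every signal update rather than once at login. A minimal sketch (all field names and thresholds are mine, purely illustrative):

```python
# Hypothetical zero-trust session re-evaluation: access is revoked the
# moment device posture degrades or behavior turns anomalous, instead
# of persisting until logout.
def access_decision(session: dict) -> str:
    if session.get("device_posture") != "healthy":
        return "revoke"   # state of the device changed
    if session.get("anomaly_score", 0.0) > 0.8:
        return "revoke"   # anomalous behavior detected
    return "allow"

sessions = [
    {"user": "plc-operator", "device_posture": "healthy",  "anomaly_score": 0.10},
    {"user": "vendor-vpn",   "device_posture": "healthy",  "anomaly_score": 0.95},
    {"user": "hmi-console",  "device_posture": "outdated", "anomaly_score": 0.00},
]
for s in sessions:
    print(s["user"], "->", access_decision(s))
```

In a real deployment the inputs would come from endpoint telemetry and behavioral analytics feeds, and the "revoke" branch would trigger the remote access gateway to drop the session.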


Why Enterprises Still Grapple With Data Governance

“Even in highly regulated industries where the acceptance and understanding of the concept and value of governance more broadly are ingrained into the corporate culture, most data governance programs have progressed very little past an expensive [check] boxing exercise, one that has kept regulatory queries to a minimum but returned very little additional business value on the investment,” says Willis in an email interview. ... Why the disconnect? Data teams don’t feel they can spend time understanding stakeholders or even challenging business stakeholder needs. Though executive support is critical, data governance professionals are not making the most out of that support. One often unacknowledged problem is culture. “Unfortunately, in many organizations, the predominant attitude towards governance and risk management is that [they are] a burden of bureaucracy that slows innovation,” says Willis. “Data governance teams too frequently perpetuate that mindset, over-rotating on data controls and processes where the effort to execute is misaligned to the value they release.” One way to begin improving the effectiveness of data governance is to reassess the organization’s objectives and approach.


What Is Next-Generation Data Protection and Why Should Enterprise Tech Buyers Care?

Next-generation data protection was created to combat today’s most sophisticated and dangerous cyberattacks. It expands the purview of what is protected and how it is protected within an enterprise data infrastructure. This new approach also adds preemptive and predictive capabilities that help mitigate the effects of massive cyberattacks. Moreover, next-generation data protection is the last line of defense against the most vicious, unscrupulous cyber criminals who want nothing more than to take down and harm large companies, either for monetary gain or respect amongst fellow criminals. Therefore, understanding and implementing next-generation data protection is vital. ... To make data protection highly effective today for the datasets that seem most critical, it has to be highly integrated and orchestrated. You don’t want a manual process creating a weak spot for your organization. To resolve this issue, one of the breakthrough capabilities of next-generation data protection is automated cyber protection. Automated cyber protection seamlessly integrates cyber storage resilience into a security operations center (SOC) and data center-wide cyber security applications, such as SIEM and SOAR.


Federal Cyber Operations Would Downgrade Under Shutdown

The pending shutdown could trigger major cutbacks to critical technology services across the federal government, including DHS's Science and Technology Directorate, which provides technical expertise to address emerging threats impacting DHS, first responders and private sector organizations. During a lapse in appropriations, just 31 of its staff members would be retained, representing a staggering 94% reduction in its workforce. The shutdown could also lead to longer airport lines and furloughs for hundreds of thousands of federal workers. Brian Fox, CTO of software supply chain management firm Sonatype, previously told Information Security Media Group that CISA plays a critical role in safeguarding government infrastructure during periods of political turbulence. "It's no secret that times of uncertainty, change and disruption are prime opportunities for threat actors to increase efforts to infiltrate systems," Fox said. The shutdown is set to begin at 12:01 a.m. on Saturday, December 21, unless lawmakers can pass a short-term spending bill, after the House rejected a compromise package Thursday night following online remarks from President-elect Donald Trump and his billionaire government efficiency advisor, Elon Musk.


Why cybersecurity is critical to energy modernization

Connected infrastructures for renewables, in many cases, are operated by new companies or even residential users. They don’t have a background in managing reliability and, generally, have very limited or no cybersecurity expertise. Despite this, they all oversee internet-connected systems that are digitally controlled and therefore vulnerable to hacking. The cumulative power controlled by many connected parties also poses a risk of blackouts. The concern is about the suppliers, especially for consumer equipment, as it is not possible to impose security regulations on consumers. The Cyber Resilience Act tries to address suppliers but is likely not sufficient. ... International collaboration is crucial in addressing the cybersecurity risks posed by interconnected energy grids. By sharing knowledge, harmonizing standards, and coordinating joint incident response efforts, countries can collectively enhance their preparedness and resilience. There are various formal international collaborations, such as ENTSO-E and the DSO Entity SEEG, coordination groups like WG8 in NIS, and partnerships between experts and authorities in groups like NCCS. International exercises led by organizations like ENISA and NATO further support these initiatives.


US Ban on TP-Link Routers More About Politics Than Exploitation Risk

While no researcher has called out a specific backdoor or zero-day vulnerability in TP-Link routers, restricting products from a country that is a political and economic rival is not unreasonable, says Thomas Pace, CEO of extended Internet of Things (IoT) security firm NetRise and a former head of cybersecurity for the US Department of Energy. ... Companies and consumers should do their due diligence, keep their devices up to date with the latest security patches, and consider whether the manufacturer of their critical hardware may have secondary motives, says Phosphorus Cybersecurity's Shankar. "The vast majority of successful attacks on IoT are enabled by preventable issues like static, unchanged default passwords, or unpatched firmware, leaving systems exposed," he says. "For business operators and consumer end-users, the key takeaway is clear: adopting basic security hygiene is a critical defense against both opportunistic and sophisticated attacks. Don’t leave the front door open." For companies worried about the origin of their networking devices or the security of their supply chain, finding a trusted third party to manage the devices is a reasonable option. In reality, though, almost every device should be monitored and not trusted, says NetRise's Pace.


The Next Big Thing: How Generative AI Is Reshaping DevOps in the Cloud

One of the biggest impacts of AI on DevOps is in Continuous Integration and Continuous Delivery (CI/CD) pipelines. These pipelines help automate how code changes are managed and deployed to production environments. Automation in this area makes operations more efficient. However, as codebases grow and get more complex, these pipelines often need manual tuning and adjustments to run smoothly. AI impacts this by making pipelines smarter. It can analyze historical data, like build times, test results, and deployment patterns. By doing this, it can adjust how pipelines are set up to minimize bottlenecks and use resources better. For example, AI can decide which tests to run first. It chooses tests that are more likely to find bugs from code changes. This helps to speed up the process of testing and deploying code. ... Security has always been very important for cloud-native apps and DevOps teams. With Generative AI, we can now move from reactive to proactive when it comes to system vulnerabilities. Instead of just waiting for security issues to appear, AI helps DevOps teams spot and prevent potential risks ahead of time. AI-powered security tools can perform data analysis on a company’s cloud system. 
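The test-prioritization idea can be sketched with plain historical failure counts, a stand-in for whatever model a real CI system would train; the data, test names, and file names below are illustrative:

```python
# Illustrative sketch of history-based test prioritization: given the files
# a change touches, run the tests that historically failed most often on
# those files first. Real systems learn richer signals; this is a toy.

from collections import defaultdict

def failure_rates(history):
    """history: list of (test_name, touched_file, failed) observations."""
    runs, fails = defaultdict(int), defaultdict(int)
    for test, path, failed in history:
        runs[(test, path)] += 1
        fails[(test, path)] += int(failed)
    return {k: fails[k] / runs[k] for k in runs}

def prioritize(tests, changed_files, rates):
    """Order tests by their worst historical failure rate on the changed files."""
    def score(test):
        return max((rates.get((test, f), 0.0) for f in changed_files), default=0.0)
    return sorted(tests, key=score, reverse=True)

history = [
    ("test_auth", "auth.py", True), ("test_auth", "auth.py", True),
    ("test_auth", "auth.py", False),
    ("test_ui", "auth.py", False), ("test_ui", "ui.py", True),
]
order = prioritize(["test_ui", "test_auth"], ["auth.py"], failure_rates(history))
print(order)  # test_auth runs first: it failed most often when auth.py changed
```

Running the likeliest-to-fail tests first surfaces bugs earlier without changing what ultimately gets tested.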


US order is a reminder that cloud platforms aren’t secure out of the box

Affected IT departments are ordered to implement a set of baseline configurations set out by the Secure Cloud Business Applications (SCuBA) project for certain software as a service (SaaS) platforms. So far, the directive notes, the only final configuration baseline set is for Microsoft 365. There is also a baseline configuration for Google Workspace listed on the SCuBA website that isn’t mentioned in this week’s directive. However, the order does say that in the future, CISA may release additional SCuBA Secure Configuration Baselines for other cloud products. When the baselines are issued, they will also fall under the scope of this week’s directive. ... Coincidentally, the CISA directive comes the same week as CSO reported that Amazon has halted its deployment of M365 for a full year, as Microsoft tries to fix a long list of security problems that Amazon identified. A CISA spokesperson said he couldn’t comment on why the directive was issued this week, but Dubrovsky believes it’s “more of a generic warning” to federal departments, and not linked to an event. Asked how private-sector CISOs should secure cloud platforms, Dubrovsky said they should start with cybersecurity basics. That includes implementing tough identity and access management policies, including MFA, and performing network monitoring and alerting for abnormalities, before going into the cloud.


The value of generosity in leadership

For the first time we have five generations in the workforce, which means that needs, priorities, and sources of meaning vary. Generosity becomes much more important because you cannot achieve everything by yourself. You can only do that by empowering others and giving them the tools, opportunities, and trust they need to succeed. And then, hopefully, they can together fulfill the organization’s purpose, objectives, and dreams. ... The opposite of a generous leader is a narcissistic leader, who is focused on themselves. Narcissistic leaders are not as effective as leaders who have higher EQs [emotional quotients], who are more generous and recognize that the team’s performance is a result of something beyond themselves. But for one reason or another, narcissistic leaders continue to rise to the top. ... That link between being generous with yourself and being generous with others is so important. When I’ve seen leaders really unlock a new level of leadership, and generosity in leadership, it comes from first and foremost understanding how to lead themselves, and specifically, how to control the amygdala hijack that can send you below the line. Those are very real physiological tendencies that can create what appears to be a zero-sum context based on winning and losing. 



Quote for the day:

"Small daily improvements over time lead to stunning results." -- Robin Sherman

Daily Tech Digest - December 20, 2024

The Top 25 Security Predictions for 2025

“Malicious actors will go full throttle in mining the potential of AI in making cyber crime easier, faster and deadlier. But this emerging and ever-evolving technology can also be made to work for enterprise security and protection by harnessing it for threat intelligence, asset profile management, attack path prediction and remediation guidance. As SOCs catch up to secure innovations still and yet unraveling, protecting enterprises from tried and tested modes of attack remains essential. While innovation makes for novel ways to strike, criminals will still utilize what is easy and what has worked for them for years.” ... Organizations are urged to embrace scalable, cloud-native security information and event management (SIEM) solutions. These tools improve threat detection and response by integrating logs from cloud and endpoint systems and automating incident management with security orchestration, automation, and response (SOAR) features. ... While targets like edge devices will continue to capture the attention of threat actors, there’s another part of the attack surface that defenders must pay close attention to over the next few years: their cloud environments. Although cloud isn’t new, it’s increasingly piquing the interest of cyber criminals. 


Why AI language models choke on too much text

Although RNNs have fallen out of favor since the invention of the transformer, people have continued trying to develop RNNs suitable for training on modern GPUs. In April, Google announced a new model called Infini-attention. It’s kind of a hybrid between a transformer and an RNN. Infini-attention handles recent tokens like a normal transformer, remembering them and recalling them using an attention mechanism. However, Infini-attention doesn’t try to remember every token in a model’s context. Instead, it stores older tokens in a “compressive memory” that works something like the hidden state of an RNN. This data structure can perfectly store and recall a few tokens, but as the number of tokens grows, its recall becomes lossier. ... Transformers are good at information recall because they “remember” every token of their context—this is also why they become less efficient as the context grows. In contrast, Mamba tries to compress the context into a fixed-size state, which necessarily means discarding some information from long contexts. The Nvidia team found they got the best performance from a hybrid architecture that interleaved 24 Mamba layers with four attention layers. This worked better than either a pure transformer model or a pure Mamba model.
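The trade-off described here can be illustrated with a toy. This is not the actual Infini-attention or Mamba mechanism, only the recall behavior they trade on: one memory stores every token exactly (like attention), the other keeps a fixed-size state and forgets older tokens as new ones arrive:

```python
# Toy illustration of the recall trade-off: exact memory grows with the
# context but recalls everything; a fixed-size state stays constant but
# becomes lossy for long contexts. Not a real model architecture.

from collections import deque

class ExactMemory:                      # attention-like: grows with context
    def __init__(self):
        self.tokens = []
    def write(self, token):
        self.tokens.append(token)
    def recall(self, token):
        return token in self.tokens     # perfect recall, cost grows with length

class FixedState:                       # RNN/Mamba-like: constant size, lossy
    def __init__(self, capacity):
        self.state = deque(maxlen=capacity)  # oldest entries are evicted
    def write(self, token):
        self.state.append(token)
    def recall(self, token):
        return token in self.state

context = [f"tok{i}" for i in range(1000)]
exact, fixed = ExactMemory(), FixedState(capacity=64)
for t in context:
    exact.write(t)
    fixed.write(t)

print(exact.recall("tok3"), fixed.recall("tok3"))      # True False (forgotten)
print(exact.recall("tok999"), fixed.recall("tok999"))  # True True (recent)
```

The hybrid designs in the article interleave the two: cheap fixed-size layers carry most of the context while a few attention layers preserve exact recall.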


The End of ‘Apps,’ Brought to You by AI?

Achieving the dream of a unified customer experience is possible, not by building a bigger app but by deploying AI super agents. Much of the groundwork has already been done: AI language models like Claude and GPT-4 are already designed to support many use cases, and Agentic AI takes that concept further. OpenAI, Google, Amazon, and Meta are all making general-purpose agents that can be used by anyone for any purpose. In theory, we might eventually see a vast network of specialized AI agents running in integration with each other. These could even serve customers’ needs within the familiar interfaces they already use. Crucially, personalization is the big selling point. It’s the reason AI super agents may succeed where super apps failed in the West. A super agent wouldn’t just aggregate services or fetch a gadget’s price when prompted. It would compare prices across frequented platforms, apply discounts, or suggest competing gadgets based on reviews you’ve left for previous models. ... This new ‘super agents’ reality would yield significant benefits for developers, too, possibly even redefining what it means to be a developer. While lots of startups invent good ideas daily, the reality of the software business is that you’re always limited by the number of developers available.


A Starter’s Framework for an Automation Center of Excellence

An automation CoE is focused on breaking down enterprise silos and promoting automation as a strategic investment imperative for achieving long-term value. It helps to ensure that when teams want to create new initiatives, they don’t duplicate previous efforts. There are various cost, efficiency and agility benefits to setting up such an entity in the enterprise. ... Focus on projects that deliver maximum impact with minimal effort. Use a clear, repeatable process to assess ROI — think about time saved, revenue gained and risks reduced versus the effort and complexity required. A simple question to ask is, “Is this process ready for automation, and do we have the right tools to make it work?” ... Your CoE needs a solid foundation. Select tools and systems that integrate seamlessly with your organization’s architecture. It might seem challenging at first, but the long-term cultural and technical benefits are worth it. Ensure your technology supports scalability as automation efforts grow. ... Standardize automation without stifling team autonomy. Striking this balance is key. Consider appointing both a business leader and a technical evangelist to champion the initiative and drive adoption across the organization. Clear ownership and guidelines will keep teams aligned while fostering innovation.
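The ROI screen suggested above can be sketched as a simple value-versus-effort score; the weights, rates, and sample numbers are illustrative assumptions, not a standard formula:

```python
# Illustrative ROI triage for automation candidates: annualized value
# (time saved plus risk reduced) divided by build cost. All figures are
# made-up examples for ranking, not benchmarks.

def roi_score(hours_saved_per_year, hourly_cost, risk_reduction_value,
              effort_hours, hourly_build_cost=150):
    value = hours_saved_per_year * hourly_cost + risk_reduction_value
    cost = effort_hours * hourly_build_cost
    return value / cost if cost else float("inf")

candidates = {
    "invoice-matching":     roi_score(1200, 40, 10_000, 300),
    "report-distribution":  roi_score(200, 40, 0, 80),
    "onboarding-checklist": roi_score(500, 35, 5_000, 400),
}
ranked = sorted(candidates, key=candidates.get, reverse=True)
print(ranked)  # highest value-per-effort first
```

A repeatable score like this keeps the CoE's intake queue honest: high-impact, low-effort work rises to the top instead of the loudest request.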


What is data architecture? A framework to manage data

The goal of data architecture is to translate business needs into data and system requirements, and to manage data and its flow through the enterprise. Many organizations today are looking to modernize their data architecture as a foundation to fully leverage AI and enable digital transformation. Consulting firm McKinsey Digital notes that many organizations fall short of their digital and AI transformation goals due to process complexity rather than technical complexity. ... While both data architecture and data modeling seek to bridge the gap between business goals and technology, data architecture is about the macro view that seeks to understand and support the relationships between an organization’s functions, technology, and data types. Data modeling takes a more focused view of specific systems or business cases. ... Modern data architectures must be scalable to handle growing data volumes without compromising performance. A scalable data architecture should be able to scale up and to scale out. ... Modern data architectures must ensure data remains accurate, consistent, and unaltered through its lifecycle to preserve its reliability for analysis and decision-making. They must prevent issues like data corruption, duplication, or loss.


Cybersecurity At the Crossroads: The Role Of Private Companies In Safeguarding U.S. Critical Infrastructure

Regulation alone is not a solution, but it does establish baseline security standards and provide much-needed funding to support defenses. Standards have come a long way and are relatively mature. Though there is still a tremendous amount of gray area, and a lack of relevance or attainability for certain industries and smaller organizations. The federal government must prioritize injecting funds into cybersecurity initiatives, ensuring that even the smallest entities managing critical infrastructure can implement strong security measures. With this funding, we must build a strong defense posture and cyber resiliency within these private sector organizations. This involves more than deploying advanced tools; it requires developing skilled personnel capable of responding to incidents and defending against attacks. Upskilling programs should focus on blue teaming and incident response, ensuring that organizations have the expertise to manage their security proactively. A critical component of effective cybersecurity is understanding and applying the standard risk formula: Risk = Threat x Vulnerability x Consequence. This formula emphasizes that risk is determined by evaluating the likelihood of an attack (Threat), the weaknesses in defenses (Vulnerability), and the potential impact of a breach (Consequence).
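The formula translates directly into code; the 0-to-1 scoring scale and the sample asset scores below are illustrative assumptions:

```python
# Risk = Threat x Vulnerability x Consequence, with each factor scored
# in [0, 1]. The assets and scores are made-up examples.

def risk(threat, vulnerability, consequence):
    """Multiply likelihood of attack, weakness of defenses, and impact."""
    for factor in (threat, vulnerability, consequence):
        if not 0.0 <= factor <= 1.0:
            raise ValueError("factors must be scored between 0 and 1")
    return threat * vulnerability * consequence

# A high-consequence system with weak defenses dominates the ranking even
# when its threat likelihood is only moderate.
assets = {
    "water-treatment SCADA": risk(0.5, 0.8, 1.0),   # 0.40
    "corporate wiki":        risk(0.9, 0.6, 0.1),   # 0.054
}
worst = max(assets, key=assets.get)
print(worst, round(assets[worst], 3))
```

The multiplicative form is the important property: if any factor is near zero (no threat, no vulnerability, or no consequence), the risk collapses, which is why reducing a single factor can be an effective defense.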


Achieving Network TCO

TCO discussion should shift from a unilateral cost justification (and payback) of technology that is being proposed to a discussion of what the opportunity costs for the business will be if a network infrastructure investment is canceled or delayed. If a company determines strategically to decentralize manufacturing and distribution but is also wary of adding headcount, it's going to seek out edge computing and network automation. It’s also likely to want robust security at its remote sites, which means investments in zero-trust networks and observability software that can assure that the same level of enterprise security is being applied at remote sites as it is at central headquarters. In cases like this, it shouldn’t be the network manager or even the CIO who is solely responsible for making the budget case for network investments. Instead, the network technology investments should be packaged together in the total remote business recommendation and investment that other C-level executives argue for with the CIO and/or network manager, HR, and others. In this scenario, the TCO of a network technology investment is weighed against the cost of not doing it at all and missing a corporate opportunity to decentralize operations, which can’t be accomplished without the technology that is needed to run it.


The coming hardware revolution: How to address AI’s insatiable demands

The US forecast for energy consumption on AI is alarming. Today’s AI queries require roughly 10x the electricity of traditional Google queries - a ChatGPT request consumes around ten times the watt-hours of a Google search. A typical CPU in a data center draws approximately 300 watts (Electric Power Research Institute), while an Nvidia H100 GPU draws up to 700 watts - run continuously for a month, a single H100 consumes electricity roughly comparable to an average US household's monthly usage. Advancements in AI model capabilities, and greater use of parameters, continue to drive energy consumption higher. Much of this demand is centralized in data centers as companies like Amazon, Microsoft, Google, and Meta build more and more massive hyperscale facilities all over the country. US data center electricity consumption is projected to grow 125 percent by 2030, using nine percent of all national electricity. ... While big tech companies certainly have the benefit of incumbency and funding advantage, the startup ecosystem will play an absolutely crucial role in driving the innovation necessary to enable the future of AI. Large public tech companies often have difficulty innovating at the same speed as smaller, more nimble startups.
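The arithmetic implied by those figures is worth making explicit, assuming continuous full-power operation, which real utilization rarely matches:

```python
# Back-of-envelope energy check for the power figures above: a 700 W GPU
# versus a 300 W server CPU, each running flat-out for a 30-day month.

HOURS_PER_MONTH = 24 * 30  # 720 hours

def monthly_kwh(watts, hours=HOURS_PER_MONTH):
    return watts * hours / 1000  # watts x hours -> kilowatt-hours

gpu = monthly_kwh(700)    # 504 kWh
cpu = monthly_kwh(300)    # 216 kWh
print(f"H100: {gpu:.0f} kWh/month, CPU: {cpu:.0f} kWh/month")
# For scale, US government figures put average household consumption at
# roughly 900 kWh/month, so one continuously loaded H100 is in the same
# order of magnitude as a home's entire electricity use.
```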


Agents are the 'third wave' of the AI revolution

"Agentic AI will be the next wave of unlocked value at scale," Sesh Iyer, managing director and senior partner with BCG X, Boston Consulting Group's tech build and design unit, told ZDNET. ... As with both analytical and gen AI, AI agents need to be built with and run along clear ethical and operational guidelines. This includes testing to minimize errors and a governance structure. As is the case with all AI instances, due diligence to ensure compliance and fairness is also a necessity for agents, Iyer said. As is also the case with broader AI, the right skills are needed to design, build and manage AI agents, he continued. Such talent is likely already available within many organizations, with the domain knowledge needed, he added. "Upskill your workforce to manage and use agentic AI effectively. Developing internal expertise will be key to capturing long-term value from these systems." ... To prepare for the shift from gen AI to agentic AI, "start small and scale strategically," he advises. "Identify a few high-impact use cases -- such as customer service -- and run pilot programs to test and refine agent capabilities. Alongside these use cases, understand the emerging platforms and software components that offer support for agentic AI."


Having it both ways – bringing the cloud to on-premises data storage

“StaaS is an increasingly popular choice for organisations, with demand only likely to grow soon. The simple reason for this is two-fold: it provides both convenience and simplicity,” said Anthony Cusimano, Director of Technical Marketing at Object First, a supplier of immutable backup storage appliances. There is more than one flavour of on-premises StaaS, as was pointed out by A3 Communications panel member Camberley Bates, Chief Technology Advisor at IT research and advisory firm The Futurum Group. Bates pointed out that the two general categories of on-premises StaaS service are Managed and Non-Managed StaaS. Managed StaaS sees vendors handling the whole storage stack, by both implementing and then fully managing storage systems on customers’ premises. However, Bates said enterprises are more attracted to Non-Managed StaaS. ... “Non-managed StaaS has become surprisingly of interest in the market. This is because enterprises buy it ‘once’ and do not have to go back for a capex request over and over again. Rather, it becomes a monthly bill that they can true-up over time. We have found the fully managed offering of less interest, with enterprises opting to use their own resources to handle the storage management,” continued Bates.



Quote for the day:

“If you don’t try at anything, you can’t fail… it takes backbone to lead the life you want” -- Richard Yates

Daily Tech Digest - December 19, 2024

How AI-Empowered ‘Citizen Developers’ Help Drive Digital Transformation

To compete in the future, companies know they need more IT capabilities, and the current supply chain has failed to provide the necessary resources. The only way for companies to fill the void is through greater emphasis on the skill development of their existing staff — their citizens. Imagine two different organizations. Both have explicit initiatives underway to digitally transform their businesses. In one, the IT organization tries to carry the load by itself. There, the mandate to digitize has only created more demand for new applications, automations, and data analyses — but no new supply. Department leaders and digitally oriented professionals initially submitted request after request, but as the backlog grew, they became discouraged and stopped bothering to ask when their solutions would be forthcoming. After a couple of years, no one even mentioned digital transformation anymore. In the other organization, digital transformation was a broad organizational mandate. IT was certainly a part of it and had to update a variety of enterprise transaction systems as well as moving most systems to the cloud. They had their hands full with this aspect of the transformation. Fortunately, in this hypothetical company, many citizens were engaged in the transformation process as well. 


Things CIOs and CTOs Need To Do Differently in 2025

“Because the nature of the threat that organizations face is increasing all the time, the tooling that’s capable of mitigating those threats becomes more and more expensive,” says Logan. “Add to that the constantly changing privacy security rules around the globe and it becomes a real challenge to navigate effectively.” Also realize that everyone in the organization is on the same team, so problems should be solved as a team. IT leadership is in a unique position to help break down the silos between different stakeholder groups. ... CIOs and CTOs face several risks as they attempt to manage technology, privacy, ROI, security, talent and technology integration. According to Joe Batista, chief creatologist, former Dell Technologies & Hewlett Packard Enterprise executive, senior IT leaders and their teams should focus on improving the conditions and skills needed to address such challenges in 2025 so they can continue to innovate. “Keep collaborating across the enterprise with other business leaders and peers. Take it a step further by exploring how ecosystems can impact your business agenda,” says Batista. “Foster an environment that encourages taking on greater risks. The key is creating a space where innovation can thrive, and failures are steppingstones to success.”


5 reasons why 2025 will be the year of OpenTelemetry

OTel was initially targeted at cloud-native applications, but with the creation of a special interest group within OpenTelemetry focused on the continuous integration and continuous delivery (CI/CD) application development pipeline, OTel becomes a more powerful, end-to-end tool. “CI/CD observability is essential for ensuring that software is released to production efficiently and reliably,” according to project lead Dotan Horovits. “By integrating observability into CI/CD workflows, teams can monitor the health and performance of their pipelines in real-time, gaining insights into bottlenecks and areas that require improvement.” He adds that open standards are critical because they “create a common uniform language which is tool- and vendor-agnostic, enabling cohesive observability across different tools and allowing teams to maintain a clear and comprehensive view of their CI/CD pipeline performance.” ... The explosion of interest in AI, genAI, and large language models (LLM) is creating an explosion in the volume of data that is generated, processed and transmitted across enterprise networks. That means a commensurate increase in the volume of telemetry data that needs to be collected in order to make sure AI systems are operating efficiently.
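As a sketch of the kind of insight CI/CD observability yields, the following analyzes span-like (stage, start, end) records to find a pipeline's bottleneck. A real setup would emit these spans via the OpenTelemetry SDK to a collector rather than hand-building them; the stage names and timings are made up:

```python
# Illustrative bottleneck analysis over span-like pipeline records.
# Each record is (stage, start_seconds, end_seconds) across several runs.

def bottleneck(spans):
    """Return the stage with the largest total duration, plus all totals."""
    totals = {}
    for stage, start, end in spans:
        totals[stage] = totals.get(stage, 0.0) + (end - start)
    return max(totals, key=totals.get), totals

spans = [
    # run 1
    ("checkout", 0, 12), ("build", 12, 160), ("test", 160, 420),
    # run 2
    ("checkout", 0, 10), ("build", 10, 150), ("test", 150, 300),
]
stage, totals = bottleneck(spans)
print(stage, totals[stage])  # the test stage dominates total pipeline time
```

Because the span format is tool- and vendor-agnostic, the same analysis works whether the spans come from Jenkins, GitHub Actions, or any other instrumented pipeline, which is exactly the uniformity the open standard is meant to provide.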


The Importance of Empowering CFOs Against Cyber Threats

Today's CFOs must be collaborative leaders, willing to embrace an expanding role that includes protecting critical assets and securing the bottom line. To do this, CFOs must work closely with chief information security officers (CISOs), due to the sophistication and financial impact of cyberattacks. ... CFOs are uniquely positioned to understand the potential financial devastation from cyber incidents. The costs associated with a breach extend beyond immediate financial losses, encompassing longer-term repercussions, such as reputational damage, legal liabilities, and regulatory fines. CFOs must measure and consider these potential financial impacts when participating in incident response planning. ... The regulatory landscape for CFOs has evolved significantly beyond Sarbanes-Oxley. The Securities and Exchange Commission's (SEC's) rules on cybersecurity risk management, strategy, governance, and incident disclosure have become a primary concern for CFOs and reflect the growing recognition of cybersecurity as a critical financial and operational risk. ... Adding to the complexity, the CFO is now a cross-functional collaborator who must work closely with IT, legal, and other departments to prioritize cyber initiatives and investments. 


Community Banks Face Perfect Storm of Cybersecurity, Regulatory and Funding Pressures

Cybersecurity risks continue to cast a long shadow over technological advancement. About 42% of bankers expect cybersecurity risks to pose their most difficult challenge in implementing new technologies over the next five years. This concern is driving many institutions to take a cautious approach to emerging technologies like artificial intelligence. ... Banks express varying levels of satisfaction with their technology services. Asset liability management and interest rate risk technologies receive the highest satisfaction ratings, with 87% and 84% of respondents respectively reporting being “extremely” or “somewhat” satisfied. However, workflow processing and core service provider services show room for improvement, with less than 70% of banks expressing satisfaction with these areas. ... Compliance costs continue to consume a significant portion of bank resources. Legal and accounting/auditing expenses related to compliance saw notable increases, with both categories rising nearly 4 percentage points as a share of total expenses. The implementation of the current expected credit loss (CECL) accounting standard has contributed to these rising costs.


Dark Data Explained

Dark data often lies dormant and untapped, its value obscured by poor quality and disorganization. Yet within these neglected reservoirs of information lies the potential for significant insights and improved decision-making. To unlock this potential, data cleaning and optimization become vital. Cleaning dark data involves identifying and correcting inaccuracies, filling in missing entries, and eliminating redundancies. This initial step is crucial, as unclean data can lead to erroneous conclusions and misguided strategies. Optimization furthers the process by enhancing the usability and accessibility of the data. Techniques such as data transformation, normalization, and integration play pivotal roles in refining dark data. By transforming the data into standardized formats and ensuring it adheres to consistent structures, companies and researchers can more effectively analyze and interpret the information. Additionally, integration across different data sets and sources can uncover previously hidden patterns and relationships, offering a comprehensive view of the phenomenon being studied. By converting dark data through meticulous cleaning and sophisticated optimization, organizations can derive actionable insights and add substantial value. 
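The cleaning steps described (normalizing formats, filling missing entries, eliminating redundancies) can be sketched as follows; the records and field names are made-up examples:

```python
# Illustrative dark-data cleanup: normalize formats, fill missing values
# with a sentinel, and drop records that are redundant once normalized.

def clean(records):
    seen, cleaned = set(), []
    for r in records:
        row = {
            "name": (r.get("name") or "unknown").strip().title(),   # normalize case
            "email": (r.get("email") or "").strip().lower(),        # normalize format
        }
        key = (row["name"], row["email"])   # dedupe on normalized values
        if key not in seen:
            seen.add(key)
            cleaned.append(row)
    return cleaned

raw = [
    {"name": "  ada LOVELACE ", "email": "Ada@Example.com"},
    {"name": "Ada Lovelace", "email": "ada@example.com"},   # redundant after cleanup
    {"name": None, "email": "grace@example.com"},           # missing name
]
print(clean(raw))
```

Note that the duplicate only becomes visible after normalization: "  ada LOVELACE " and "Ada Lovelace" look distinct as raw strings, which is why cleaning must precede deduplication.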


In potential reversal, European authorities say AI can indeed use personal data — without consent — for training

The European Data Protection Board (EDPB) issued a wide-ranging report on Wednesday exploring the many complexities and intricacies of modern AI model development. It said that it was open to potentially allowing personal data, without the owner’s consent, to train models, as long as the finished application does not reveal any of that private information. This reflects the reality that training data does not necessarily translate into the information eventually delivered to end users. ... “Nowhere does the EDPB seem to look at whether something is actually personal data for the AI model provider. It always presumes that it is, and only looks at whether anonymization has taken place and is sufficient,” Craddock wrote. “If insufficient, the SA would be in a position to consider that the controller has failed to meet its accountability obligations under Article 5(2) GDPR.” And in a comment on LinkedIn that mostly supported the standards group’s efforts, Patrick Rankine, the CIO of UK AI vendor Aiphoria, said that IT leaders should stop complaining and up their AI game. “For AI developers, this means that claims of anonymity should be substantiated with evidence, including the implementation of technical and organizational measures to prevent re-identification,” he wrote, noting that he agrees 100% with this sentiment.


Software Architecture and the Art of Experimentation

While we can’t avoid being wrong some of the time, we can reduce the cost of being wrong by running small experiments to test our assumptions and reverse wrong decisions before their costs compound. But here time is the enemy: there is never enough time to test every assumption and so knowing which ones to confront is the art in architecting. Successful architecting means experimenting to test decisions that affect the architecture of the system, i.e. those decisions that are "fatal" to the success of the thing you are building if you are wrong. ... If you don’t run an experiment you are assuming you already know the answer to some question. So long as that’s the case, or so long as the risk and cost of being wrong is small, you may not need to experiment. Some big questions, however, can only be answered by experimenting. Since you probably can’t run experiments for all the questions you have to answer, implicitly accepting the associated risk, so you need to make a trade-off between the number of experiments you can run and the risks you won’t be able to mitigate by experimenting. The challenge in creating experiments that test both the MVP and MVA is asking questions that challenge the business and technical assumptions of both stakeholders and developers. 


5 job negotiation tips for CAIOs

As you discuss base, bonus, and equity, be specific and find out exactly what their pay range actually is for this emerging role and how that compares with market rates for your location. For example, some recruiters may give you a higher number early in discussions, and then, once you’re well bought into the company after several interviews, the final offer may throttle things back. ... Set clear expectations early, and be prepared to withdraw your candidacy if any downward-revised amount later on is too far below your household needs. ... As a CAIO, you don’t want to be measured the same as the lines of business, or penalized if they fall short of quarterly or yearly sales targets. Ensure your performance metrics are appropriate for the role and the balance you’ll need to strike between near-term and longer-term objectives. Certainly, AI should enable near-term productivity improvements and cost savings, but it should also enable longer-term revenue growth via new products and services, or enhancements to existing offerings. ... Companies sometimes place a clause in their legal agreement that states they own all pre-existing IP. Get that clause removed, and itemize your pre-existing IP if needed to ensure it stays under your ownership.


Leadership skills for managing cybersecurity during digital transformation

First, security must be top of mind as all new technologies are planned. As you innovate, ensure that security is built into deployments, and that the options chosen match your business risk profile and organization’s values. For example, consider enabling the maximum security features that come with many IoT devices, such as forcing the change of default passwords, patching devices, and ensuring vulnerabilities can be addressed. Likewise, ensure that AI applications are ethically sound, transparent, and do not introduce unintended biases. Second, a comprehensive risk assessment should be performed on the current network and systems environment as well as on the future planned “To Be” architecture. ... Digital transformation also demands leaders who are not only technically adept but also visionary in guiding their organizations through change. Leaders must be able to inspire a digital culture, align teams with new technologies, and drive strategic initiatives that leverage digital capabilities for competitive advantage. Finally, leaders must be lifelong learners who constantly update their skills and forge strong relationships across their organization in this new digitally transformed environment.



Quote for the day:

"Don’t watch the clock; do what it does. Keep going." -- Sam Levenson