Daily Tech Digest - December 20, 2024

The Top 25 Security Predictions for 2025

“Malicious actors will go full throttle in mining the potential of AI in making cyber crime easier, faster and deadlier. But this emerging and ever-evolving technology can also be made to work for enterprise security and protection by harnessing it for threat intelligence, asset profile management, attack path prediction and remediation guidance. As SOCs catch up to secure innovations still and yet unraveling, protecting enterprises from tried and tested modes of attack remains essential. While innovation makes for novel ways to strike, criminals will still utilize what is easy and what has worked for them for years.” ... Organizations are urged to embrace scalable, cloud-native security information and event management (SIEM) solutions. These tools improve threat detection and response by integrating logs from cloud and endpoint systems and automating incident management with security orchestration, automation, and response (SOAR) features. ... While targets like edge devices will continue to capture the attention of threat actors, there’s another part of the attack surface that defenders must pay close attention to over the next few years: their cloud environments. Although cloud isn’t new, it’s increasingly piquing the interest of cyber criminals. 


Why AI language models choke on too much text

Although RNNs have fallen out of favor since the invention of the transformer, people have continued trying to develop RNNs suitable for training on modern GPUs. In April, Google announced a new model called Infini-attention. It’s kind of a hybrid between a transformer and an RNN. Infini-attention handles recent tokens like a normal transformer, remembering them and recalling them using an attention mechanism. However, Infini-attention doesn’t try to remember every token in a model’s context. Instead, it stores older tokens in a “compressive memory” that works something like the hidden state of an RNN. This data structure can perfectly store and recall a few tokens, but as the number of tokens grows, its recall becomes lossier. ... Transformers are good at information recall because they “remember” every token of their context—this is also why they become less efficient as the context grows. In contrast, Mamba tries to compress the context into a fixed-size state, which necessarily means discarding some information from long contexts. The Nvidia team found they got the best performance from a hybrid architecture that interleaved 24 Mamba layers with four attention layers. This worked better than either a pure transformer model or a pure Mamba model.
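
For readers who want to see the interleaving idea in code, here is a minimal sketch of a hybrid stack, assuming PyTorch and using a GRU as a stand-in for the Mamba state-space block (real Mamba layers use selective state-space updates, and a real decoder would also apply causal masking); the 24:4 layer ratio mirrors the Nvidia configuration described above, while the dimensions are illustrative.

```python
# Sketch: interleave fixed-state recurrent layers with a few attention layers.
# Assumption: a GRU stands in for a Mamba-style state-space block; real Mamba
# layers use selective state-space updates, and a real decoder adds causal masks.
import torch
import torch.nn as nn

class RecurrentBlock(nn.Module):
    """Fixed-size state: the context is compressed into the GRU hidden state."""
    def __init__(self, dim):
        super().__init__()
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        out, _ = self.rnn(x)
        return self.norm(x + out)          # residual connection

class AttentionBlock(nn.Module):
    """Full attention: every token can attend to every other token it has seen."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        out, _ = self.attn(x, x, x, need_weights=False)
        return self.norm(x + out)

def build_hybrid(dim=256, recurrent_layers=24, attention_layers=4):
    """Insert one attention block after every run of recurrent layers."""
    layers, step = [], recurrent_layers // attention_layers
    for i in range(recurrent_layers):
        layers.append(RecurrentBlock(dim))
        if (i + 1) % step == 0 and attention_layers > 0:
            layers.append(AttentionBlock(dim))
            attention_layers -= 1
    return nn.Sequential(*layers)

model = build_hybrid()
tokens = torch.randn(2, 128, 256)           # (batch, sequence, embedding)
print(model(tokens).shape)                   # torch.Size([2, 128, 256])
```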


The End of ‘Apps,’ Brought to You by AI?

Achieving the dream of a unified customer experience is possible, not by building a bigger app but by deploying AI super agents. Much of the groundwork has already been done: AI language models like Claude and GPT-4 are already designed to support many use cases, and Agentic AI takes that concept further. OpenAI, Google, Amazon, and Meta are all making general-purpose agents that can be used by anyone for any purpose. In theory, we might eventually see a vast network of specialized AI agents running in integration with each other. These could even serve customers’ needs within the familiar interfaces they already use. Crucially, personalization is the big selling point. It’s the reason AI super agents may succeed where super apps failed in the West. A super agent wouldn’t just aggregate services or fetch a gadget’s price when prompted. It would compare prices across frequented platforms, apply discounts, or suggest competing gadgets based on reviews you’ve left for previous models. ... This new ‘super agents’ reality would yield significant benefits for developers, too, possibly even redefining what it means to be a developer. While lots of startups invent good ideas daily, the reality of the software business is that you’re always limited by the number of developers available.


A Starter’s Framework for an Automation Center of Excellence

An automation CoE is focused on breaking down enterprise silos and promoting automation as a strategic investment imperative for achieving long-term value. It helps to ensure that when teams want to create new initiatives, they don’t duplicate previous efforts. There are various cost, efficiency and agility benefits to setting up such an entity in the enterprise. ... Focus on projects that deliver maximum impact with minimal effort. Use a clear, repeatable process to assess ROI — think about time saved, revenue gained and risks reduced versus the effort and complexity required. A simple question to ask is, “Is this process ready for automation, and do we have the right tools to make it work?” ... Your CoE needs a solid foundation. Select tools and systems that integrate seamlessly with your organization’s architecture. It might seem challenging at first, but the long-term cultural and technical benefits are worth it. Ensure your technology supports scalability as automation efforts grow. ... Standardize automation without stifling team autonomy. Striking this balance is key. Consider appointing both a business leader and a technical evangelist to champion the initiative and drive adoption across the organization. Clear ownership and guidelines will keep teams aligned while fostering innovation.
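
As a concrete illustration of the "clear, repeatable process to assess ROI" described above, here is a small scoring sketch; the 1-to-5 scales, equal weighting, and example candidates are assumptions for illustration, not part of any particular CoE framework.

```python
# Sketch: a repeatable scoring pass over automation candidates.
# Assumption: the 1-5 scores and equal weighting are illustrative choices only.
def automation_score(time_saved, revenue_gained, risk_reduced, effort, complexity):
    """Higher is better: value delivered divided by the cost to deliver it."""
    value = time_saved + revenue_gained + risk_reduced
    cost = effort + complexity
    return round(value / cost, 2)

candidates = {
    "invoice matching":    automation_score(5, 2, 3, 2, 2),
    "employee onboarding": automation_score(3, 1, 2, 4, 4),
}
for name, score in sorted(candidates.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score}")   # invoice matching: 2.5, employee onboarding: 0.75
```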


What is data architecture? A framework to manage data

The goal of data architecture is to translate business needs into data and system requirements, and to manage data and its flow through the enterprise. Many organizations today are looking to modernize their data architecture as a foundation to fully leverage AI and enable digital transformation. Consulting firm McKinsey Digital notes that many organizations fall short of their digital and AI transformation goals due to process complexity rather than technical complexity. ... While both data architecture and data modeling seek to bridge the gap between business goals and technology, data architecture is about the macro view that seeks to understand and support the relationships between an organization’s functions, technology, and data types. Data modeling takes a more focused view of specific systems or business cases. ... Modern data architectures must be scalable to handle growing data volumes without compromising performance. A scalable data architecture should be able to scale up and to scale out. ... Modern data architectures must ensure data remains accurate, consistent, and unaltered through its lifecycle to preserve its reliability for analysis and decision-making. They must prevent issues like data corruption, duplication, or loss.


Cybersecurity At the Crossroads: The Role Of Private Companies In Safeguarding U.S. Critical Infrastructure

Regulation alone is not a solution, but it does establish baseline security standards and provide much-needed funding to support defenses. Standards have come a long way and are relatively mature, though there is still a tremendous amount of gray area and a lack of relevance or attainability for certain industries and smaller organizations. The federal government must prioritize injecting funds into cybersecurity initiatives, ensuring that even the smallest entities managing critical infrastructure can implement strong security measures. With this funding, we must build a strong defense posture and cyber resiliency within these private sector organizations. This involves more than deploying advanced tools; it requires developing skilled personnel capable of responding to incidents and defending against attacks. Upskilling programs should focus on blue teaming and incident response, ensuring that organizations have the expertise to manage their security proactively. A critical component of effective cybersecurity is understanding and applying the standard risk formula: Risk = Threat x Vulnerability x Consequence. This formula emphasizes that risk is determined by evaluating the likelihood of an attack (Threat), the weaknesses in defenses (Vulnerability), and the potential impact of a breach (Consequence).
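
A worked example of that risk formula, using an assumed 1-to-5 scale for each factor (the scale and the sample scores are illustrative, not part of any standard):

```python
# Risk = Threat x Vulnerability x Consequence, each scored here on a 1-5 scale.
# Assumption: the scales and sample values below are illustrative only.
def risk(threat, vulnerability, consequence):
    return threat * vulnerability * consequence

# An internet-exposed system with weak patching but modest impact...
print(risk(threat=4, vulnerability=5, consequence=2))   # 40
# ...versus a well-defended system whose failure would be severe.
print(risk(threat=4, vulnerability=1, consequence=5))   # 20
```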


Achieving Network TCO

TCO discussion should shift from a unilateral cost justification (and payback) of technology that is being proposed to a discussion of what the opportunity costs for the business will be if a network infrastructure investment is canceled or delayed. If a company determines strategically to decentralize manufacturing and distribution but is also wary of adding headcount, it's going to seek out edge computing and network automation. It’s also likely to want robust security at its remote sites, which means investments in zero-trust networks and observability software that can assure that the same level of enterprise security is being applied at remote sites as it is at central headquarters. In cases like this, it shouldn’t be the network manager or even the CIO who is solely responsible for making the budget case for network investments. Instead, the network technology investments should be packaged together in the total remote business recommendation and investment that other C-level executives argue for with the CIO and/or network manager, HR, and others. In this scenario, the TCO of a network technology investment is weighed against the cost of not doing it at all and missing a corporate opportunity to decentralize operations, which can’t be accomplished without the technology that is needed to run it.


The coming hardware revolution: How to address AI’s insatiable demands

The US forecast for AI energy consumption is alarming. Today’s AI queries require roughly 10x the electricity of traditional Google queries, with a single ChatGPT request consuming about ten times the watt-hours of a Google search. A typical CPU in a data center draws approximately 300 watts (Electric Power Research Institute), while an Nvidia H100 GPU draws up to 700 watts; run continuously, a single H100 consumes on the order of what an average US household uses in a month. Advancements in AI model capabilities, and greater use of parameters, continue to drive energy consumption higher. Much of this demand is centralized in data centers as companies like Amazon, Microsoft, Google, and Meta build more and more massive hyperscale facilities all over the country. US data center electricity consumption is projected to grow 125 percent by 2030, reaching nine percent of all national electricity. ... While big tech companies certainly have the benefit of incumbency and a funding advantage, the startup ecosystem will play an absolutely crucial role in driving the innovation necessary to enable the future of AI. Large public tech companies often have difficulty innovating at the same speed as smaller, more nimble startups.
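
A quick back-of-the-envelope check of the household comparison above, assuming the H100 draws its full 700 W around the clock for a 30-day month; the roughly 900 kWh/month household figure is a commonly cited US average and is included only for rough comparison:

```python
# Back-of-the-envelope: energy drawn by one H100 GPU running flat out for a month.
# Assumption: constant 700 W draw and a 30-day month; the ~900 kWh/month household
# figure is a commonly cited US average, used here only for rough comparison.
gpu_watts = 700
hours_per_month = 24 * 30
gpu_kwh = gpu_watts * hours_per_month / 1000
print(f"H100, continuous: {gpu_kwh:.0f} kWh/month")        # 504 kWh/month
print("vs. ~900 kWh/month for an average US household")
```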


Agents are the 'third wave' of the AI revolution

"Agentic AI will be the next wave of unlocked value at scale," Sesh Iyer, managing director and senior partner with BCG X, Boston Consulting Group's tech build and design unit, told ZDNET. ... As with both analytical and gen AI, AI agents need to be built with and run along clear ethical and operational guidelines. This includes testing to minimize errors and a governance structure. As is the case with all AI instances, due diligence to ensure compliance and fairness is also a necessity for agents, Iyer said. As is also the case with broader AI, the right skills are needed to design, build and manage AI agents, he continued. Such talent is likely already available within many organizations, with the domain knowledge needed, he added. "Upskill your workforce to manage and use agentic AI effectively. Developing internal expertise will be key to capturing long-term value from these systems." ... To prepare for the shift from gen AI to agentic AI, "start small and scale strategically," he advises. "Identify a few high-impact use cases -- such as customer service -- and run pilot programs to test and refine agent capabilities. Alongside these use cases, understand the emerging platforms and software components that offer support for agentic AI."


Having it both ways – bringing the cloud to on-premises data storage

“StaaS is an increasingly popular choice for organisations, with demand only likely to grow soon. The simple reason for this is two-fold: it provides both convenience and simplicity,” said Anthony Cusimano, Director of Technical Marketing at Object First, a supplier of immutable backup storage appliances. There is more than one flavour of on-premises StaaS, as was pointed out by A3 Communications panel member Camberley Bates, Chief Technology Advisor at IT research and advisory firm The Futurum Group. Bates pointed out that the two general categories of on-premises StaaS service are Managed and Non-Managed StaaS. Managed StaaS sees vendors handling the whole storage stack, by both implementing and then fully managing storage systems on customers’ premises. However, Bates said enterprises are more attracted to Non-Managed StaaS. ... “Non-managed StaaS has become surprisingly of interest in the market. This is because enterprises buy it ‘once’ and do not have to go back for a capex request over and over again. Rather, it becomes a monthly bill that they can true-up over time. We have found the fully managed offering of less interest, with enterprises opting to use their own resources to handle the storage management,” continued Bates.



Quote for the day:

“If you don’t try at anything, you can’t fail… it takes backbone to lead the life you want” -- Richard Yates

Daily Tech Digest - December 19, 2024

How AI-Empowered ‘Citizen Developers’ Help Drive Digital Transformation

To compete in the future, companies know they need more IT capabilities, and the current supply chain has failed to provide the necessary resources. The only way for companies to fill the void is through greater emphasis on the skill development of their existing staff — their citizens. Imagine two different organizations. Both have explicit initiatives underway to digitally transform their businesses. In one, the IT organization tries to carry the load by itself. There, the mandate to digitize has only created more demand for new applications, automations, and data analyses — but no new supply. Department leaders and digitally oriented professionals initially submitted request after request, but as the backlog grew, they became discouraged and stopped bothering to ask when their solutions would be forthcoming. After a couple of years, no one even mentioned digital transformation anymore. In the other organization, digital transformation was a broad organizational mandate. IT was certainly a part of it and had to update a variety of enterprise transaction systems as well as moving most systems to the cloud. They had their hands full with this aspect of the transformation. Fortunately, in this hypothetical company, many citizens were engaged in the transformation process as well. 


Things CIOs and CTOs Need To Do Differently in 2025

“Because the nature of the threat that organizations face is increasing all the time, the tooling that’s capable of mitigating those threats becomes more and more expensive,” says Logan. “Add to that the constantly changing privacy security rules around the globe and it becomes a real challenge to navigate effectively.” Also realize that everyone in the organization is on the same team, so problems should be solved as a team. IT leadership is in a unique position to help break down the silos between different stakeholder groups. ... CIOs and CTOs face several risks as they attempt to manage technology, privacy, ROI, security, talent and technology integration. According to Joe Batista, chief creatologist, former Dell Technologies & Hewlett Packard Enterprise executive, senior IT leaders and their teams should focus on improving the conditions and skills needed to address such challenges in 2025 so they can continue to innovate. “Keep collaborating across the enterprise with other business leaders and peers. Take it a step further by exploring how ecosystems can impact your business agenda,” says Batista. “Foster an environment that encourages taking on greater risks. The key is creating a space where innovation can thrive, and failures are steppingstones to success.”


5 reasons why 2025 will be the year of OpenTelemetry

OTel was initially targeted at cloud-native applications, but with the creation of a special interest group within OpenTelemetry focused on the continuous integration and continuous delivery (CI/CD) application development pipeline, OTel becomes a more powerful, end-to-end tool. “CI/CD observability is essential for ensuring that software is released to production efficiently and reliably,” according to project lead Dotan Horovits. “By integrating observability into CI/CD workflows, teams can monitor the health and performance of their pipelines in real-time, gaining insights into bottlenecks and areas that require improvement.” He adds that open standards are critical because they “create a common uniform language which is tool- and vendor-agnostic, enabling cohesive observability across different tools and allowing teams to maintain a clear and comprehensive view of their CI/CD pipeline performance.” ... The explosion of interest in AI, genAI, and large language models (LLM) is creating an explosion in the volume of data that is generated, processed and transmitted across enterprise networks. That means a commensurate increase in the volume of telemetry data that needs to be collected in order to make sure AI systems are operating efficiently.
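
As a minimal sketch of what CI/CD observability with OpenTelemetry can look like, the snippet below emits one trace per pipeline run and one span per stage using the OpenTelemetry Python SDK (assuming the opentelemetry-sdk package is installed); the console exporter and the stage names are placeholders, and a real pipeline would export to an OTLP collector instead.

```python
# Sketch: one trace per pipeline run, one span per stage, via the OpenTelemetry
# Python SDK. ConsoleSpanExporter keeps the example self-contained; a real setup
# would export OTLP to a collector or observability backend.
import time
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("ci.pipeline")

def run_stage(name):
    with tracer.start_as_current_span(name) as span:
        span.set_attribute("ci.stage", name)
        time.sleep(0.1)                      # stand-in for the real stage work

with tracer.start_as_current_span("pipeline-run"):
    for stage in ("checkout", "build", "test", "deploy"):   # hypothetical stages
        run_stage(stage)

provider.shutdown()                          # flush spans before the process exits
```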


The Importance of Empowering CFOs Against Cyber Threats

Today's CFOs must be collaborative leaders, willing to embrace an expanding role that includes protecting critical assets and securing the bottom line. To do this, CFOs must work closely with chief information security officers (CISOs), due to the sophistication and financial impact of cyberattacks. ... CFOs are uniquely positioned to understand the potential financial devastation from cyber incidents. The costs associated with a breach extend beyond immediate financial losses, encompassing longer-term repercussions, such as reputational damage, legal liabilities, and regulatory fines. CFOs must measure and consider these potential financial impacts when participating in incident response planning. ... The regulatory landscape for CFOs has evolved significantly beyond Sarbanes-Oxley. The Securities and Exchange Commission's (SEC's) rules on cybersecurity risk management, strategy, governance, and incident disclosure have become a primary concern for CFOs and reflect the growing recognition of cybersecurity as a critical financial and operational risk. ... Adding to the complexity, the CFO is now a cross-functional collaborator who must work closely with IT, legal, and other departments to prioritize cyber initiatives and investments. 


Community Banks Face Perfect Storm of Cybersecurity, Regulatory and Funding Pressures

Cybersecurity risks continue to cast a long shadow over technological advancement. About 42% of bankers expect cybersecurity risks to pose their most difficult challenge in implementing new technologies over the next five years. This concern is driving many institutions to take a cautious approach to emerging technologies like artificial intelligence. ... Banks express varying levels of satisfaction with their technology services. Asset liability management and interest rate risk technologies receive the highest satisfaction ratings, with 87% and 84% of respondents respectively reporting being “extremely” or “somewhat” satisfied. However, workflow processing and core service provider services show room for improvement, with less than 70% of banks expressing satisfaction with these areas. ... Compliance costs continue to consume a significant portion of bank resources. Legal and accounting/auditing expenses related to compliance saw notable increases, with both categories rising nearly 4 percentage points as a share of total expenses. The implementation of the current expected credit loss (CECL) accounting standard has contributed to these rising costs.


Dark Data Explained

Dark data often lies dormant and untapped, its value obscured by poor quality and disorganization. Yet within these neglected reservoirs of information lies the potential for significant insights and improved decision-making. To unlock this potential, data cleaning and optimization become vital. Cleaning dark data involves identifying and correcting inaccuracies, filling in missing entries, and eliminating redundancies. This initial step is crucial, as unclean data can lead to erroneous conclusions and misguided strategies. Optimization furthers the process by enhancing the usability and accessibility of the data. Techniques such as data transformation, normalization, and integration play pivotal roles in refining dark data. By transforming the data into standardized formats and ensuring it adheres to consistent structures, companies and researchers can more effectively analyze and interpret the information. Additionally, integration across different data sets and sources can uncover previously hidden patterns and relationships, offering a comprehensive view of the phenomenon being studied. By converting dark data through meticulous cleaning and sophisticated optimization, organizations can derive actionable insights and add substantial value. 
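
A small pandas sketch of the cleaning and normalization steps described above (deduplication, standardizing formats, filling gaps); the columns and values are invented purely for illustration.

```python
# Sketch: basic cleaning and normalization of a neglected ("dark") dataset.
# Assumption: the column names and values are invented for illustration.
import pandas as pd

raw = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3],
    "region":      ["EMEA", "EMEA", "emea", None, "APAC"],
    "spend":       ["1,200", "1,200", "950", "700", None],
})

clean = (
    raw.drop_duplicates()                                    # eliminate redundancies
       .assign(
           region=lambda d: d["region"].str.upper().fillna("UNKNOWN"),  # standardize values
           spend=lambda d: pd.to_numeric(
               d["spend"].str.replace(",", ""), errors="coerce"          # normalize format
           ),
       )
)
clean["spend"] = clean["spend"].fillna(clean["spend"].median())          # fill missing entries
print(clean)
```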


In potential reversal, European authorities say AI can indeed use personal data — without consent — for training

The European Data Protection Board (EDPB) issued a wide-ranging report on Wednesday exploring the many complexities and intricacies of modern AI model development. It said that it was open to potentially allowing personal data to be used, without the owner’s consent, to train models, as long as the finished application does not reveal any of that private information. This reflects the reality that training data does not necessarily translate into the information eventually delivered to end users. ... “Nowhere does the EDPB seem to look at whether something is actually personal data for the AI model provider. It always presumes that it is, and only looks at whether anonymization has taken place and is sufficient,” Craddock wrote. “If insufficient, the SA would be in a position to consider that the controller has failed to meet its accountability obligations under Article 5(2) GDPR.” And in a comment on LinkedIn that mostly supported the standards group’s efforts, Patrick Rankine, the CIO of UK AI vendor Aiphoria, said that IT leaders should stop complaining and up their AI game. “For AI developers, this means that claims of anonymity should be substantiated with evidence, including the implementation of technical and organizational measures to prevent re-identification,” he wrote, noting that he agrees 100% with this sentiment.


Software Architecture and the Art of Experimentation

While we can’t avoid being wrong some of the time, we can reduce the cost of being wrong by running small experiments to test our assumptions and reverse wrong decisions before their costs compound. But here time is the enemy: there is never enough time to test every assumption and so knowing which ones to confront is the art in architecting. Successful architecting means experimenting to test decisions that affect the architecture of the system, i.e., those decisions that are "fatal" to the success of the thing you are building if you are wrong. ... If you don’t run an experiment you are assuming you already know the answer to some question. So long as that’s the case, or so long as the risk and cost of being wrong is small, you may not need to experiment. Some big questions, however, can only be answered by experimenting. Since you probably can’t run experiments for all the questions you have to answer, implicitly accepting the associated risk, you need to make a trade-off between the number of experiments you can run and the risks you won’t be able to mitigate by experimenting. The challenge in creating experiments that test both the MVP and MVA is asking questions that challenge the business and technical assumptions of both stakeholders and developers.


5 job negotiation tips for CAIOs

As you discuss base, bonus, and equity, be specific and find out exactly what their pay range actually is for this emerging role and how that compares with market rates for your location. For example, some recruiters may give you a higher number early on in discussions, and then once you’re well bought-in to the company after several interviews, the final offer may throttle things back. ... Set clear expectations early, and be prepared to withdraw your candidacy if any downward-revised amount later on is too far below your household needs. ... As a CAIO, you don’t want to be measured the same as the lines of business, or penalized if they fall short of quarterly or yearly sales targets. Ensure your performance metrics are appropriate for the role and the balance you’ll need to strike between near-term and longer-term objectives. For certain, AI should enable near-term productivity improvements and cost savings, but it should also enable longer-term revenue growth via new products and services, or enhancements to existing offerings. ... Companies sometimes place a clause in their legal agreement that states they own all pre-existing IP. Get that clause removed and itemize your pre-existing IP if needed to ensure it stays under your ownership. 


Leadership skills for managing cybersecurity during digital transformation

First, security must be top of mind as all new technologies are planned. As you innovate, ensure that security is built into deployments and that the options chosen match your business risk profile and organization’s values. For example, consider enabling the maximum security features that come with many IoT devices, such as forcing the change of default passwords, patching devices and ensuring vulnerabilities can be addressed. Likewise, ensure that AI applications are ethically sound, transparent, and do not introduce unintended biases. Second, a comprehensive risk assessment should be performed on the current network and systems environment as well as on the future planned “To Be” architecture. ... Digital transformation also demands leaders who are not only technically adept but also visionary in guiding their organizations through change. Leaders must be able to inspire a digital culture, align teams with new technologies, and drive strategic initiatives that leverage digital capabilities for competitive advantage. Finally, leaders must be lifelong learners who constantly update their skills and forge strong relationships across their organization for this new digitally-transformed environment.



Quote for the day:

"Don’t watch the clock; do what it does. Keep going." -- Sam Levenson

Daily Tech Digest - December 18, 2024

The AI-Powered IoT Revolution: Are You Ready?

AI not only reduces the cost and latency of these operations but also provides actionable intelligence, enabling smarter decisions that enhance business efficiency by preventing downtimes, minimizing losses, improving sales, and unlocking a range of benefits tailored to specific use cases. Building on this synergy, AI on Edge—where AI processes run directly on edge devices such as IoT sensors, cameras, and smartphones rather than relying solely on cloud computing—will see significant adoption by 2025. By processing data locally, edge AI enables real-time decision-making, eliminating delays caused by data transmission to and from the cloud. This capability will transform applications like autonomous vehicles, industrial automation, and healthcare devices, where fast, reliable responses are mission-critical. Moreover, AI on Edge enhances privacy and security by keeping sensitive data on the device, reduces cloud costs and bandwidth usage, and supports offline functionality in remote or connectivity-limited environments. These advantages make it an attractive option for organizations seeking to push the boundaries of innovation while delivering superior user experiences and operational efficiency. 


Key strategies to enhance cyber resilience

To bolster resilience, consider developing stakeholder-specific playbooks, Wyatt says. Different teams play different roles in incident response, from detecting risk and deploying key controls to maintaining compliance, recovery, and business continuity. Expect that each stakeholder group will have its own requirements and set of KPIs to meet, she says. “For example, the security team may have different concerns than the IT operations team. As a result, organizations should draft cyber resilience playbooks for each set of stakeholders that provide very specific guidance and ROI benefits for each group.” ... Cyber resilience is as much about the ability to recover from a major security incident as it is about proactively preparing, preventing, detecting and remediating it. That means having a formal disaster recovery plan, doing regular offsite back-ups of all critical systems and testing both the plan and the recovery process on a frequent basis. ... Boards have become very focused on managing risk and have become increasingly fluent in cyber risk. But many boards are surprised that when a crisis occurs, broader operational resilience is not a point of these discussions, according to Wyatt. Bring your board along by having external experts walk through previous events and break down the various areas of impact.


Smarter devops: How to avoid deployment horrors

Finding security issues post-deployment is a major risk, and many devops teams shift-left security practices by instituting devops security non-negotiables. These are a mix of policies, controls, automations, and tools, but most importantly, ensuring security is a top-of-mind responsibility for developers. ... “Integrating security and quality controls as early as possible in the software development lifecycle is absolutely necessary for a functioning modern devops practice,” says Christopher Hendrich, associate CTO of SADA. “Creating a developer platform with automation, AI-powered services, and clear feedback on why something is deemed insecure and how to fix it helps the developer to focus on developing while simultaneously strengthening the security mindset.” ... “Software development is a complex process that gets increasingly challenging as the software’s functionality changes or ages over time,” says Melissa McKay, head of developer relations at JFrog. “Implementing a multilayered, end-to-end approach has become essential to ensure security and quality are prioritized from initial package curation and coding to runtime monitoring.”
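
One hedged example of a "non-negotiable" in practice is an automated gate that fails the build when scanner findings exceed an agreed severity budget. The sketch below assumes a hypothetical findings.json report and thresholds; real SAST/SCA scanners each have their own output schemas and integrations.

```python
# Sketch: fail the build if scan findings exceed the agreed severity budget.
# Assumption: findings.json and its schema are hypothetical stand-ins for
# whatever SAST/SCA scanner the pipeline actually runs.
import json
import sys

BUDGET = {"critical": 0, "high": 0, "medium": 5}     # non-negotiable thresholds

def gate(report_path="findings.json"):
    with open(report_path) as f:
        findings = json.load(f)                      # e.g. [{"id": "...", "severity": "high"}, ...]
    counts = {}
    for item in findings:
        sev = item["severity"].lower()
        counts[sev] = counts.get(sev, 0) + 1
    violations = {
        sev: counts.get(sev, 0)
        for sev, limit in BUDGET.items()
        if counts.get(sev, 0) > limit
    }
    if violations:
        print(f"Security gate FAILED: {violations} exceed budget {BUDGET}")
        sys.exit(1)                                  # non-zero exit fails the CI job
    print("Security gate passed")

if __name__ == "__main__":
    gate(sys.argv[1] if len(sys.argv) > 1 else "findings.json")
```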


How Do We Build Ransomware Resilience Beyond Just Backups?

While email filtering tools are essential, it’s unrealistic to expect them to block every malicious message. As such, another important step is educating your end users on identifying phishing emails and other suspicious content that make it through the filters. User education is one of those things that should be an ongoing effort, not a one-time initiative. Regular training sessions help reinforce best practices and keep security in focus. To complement training, consider using phishing attack simulators. Several vendors offer tools that generate harmless, realistic-looking phishing messages and send them to your users. Microsoft 365 even includes a phishing simulation tool. ... Limiting user permissions is vital because ransomware operates with the permissions of the user who triggers the attack. As such, users should only have access to the resources they need to perform their jobs—no more, no less. If a user doesn’t have access to a specific resource, the ransomware won’t be able to encrypt it. Moreover, consider isolating high-value data on storage systems that require additional authentication. Doing so reduces exposure if ransomware spreads.


Azure Data Factory Bugs Expose Cloud Infrastructure

The Airflow instance's use of default, unchangeable configurations combined with the cluster admin role's attachment to the Airflow runner "caused a security issue" that could be manipulated "to control the Airflow cluster and related infrastructure," the researchers explained. If an attacker was able to breach the cluster, they also could manipulate Geneva, allowing attackers "to potentially tamper with log data or access other sensitive Azure resources," Unit 42 AI and security research manager Ofir Balassiano and senior security researcher David Orlovsky wrote in the post. Overall, the flaws highlight the importance of managing service permissions and monitoring the operations of critical third-party services within a cloud environment to prevent unauthorized access to a cluster. ... Attackers have two ways to gain access to and tamper with DAG files. They could gain write permissions to the storage account containing DAG files by leveraging a principal account with write permissions; or they could use a shared access signature (SAS) token, which grants temporary and limited access to a DAG file. In this scenario, once a DAG file is tampered with, "it lies dormant until the DAG files are imported by the victim," the researchers explained. The second way is to gain access to a Git repository using leaked credentials or a misconfigured repository.


Whatever happened to the three-year IT roadmap?

“IT roadmaps are now shorter, typically not exceeding two years, due to the rapid pace of technological change,” he says. “This allows for more flexibility and adaptability in IT planning.” Kellie Romack, chief digital information officer of ServiceNow, is also shortening her horizon to align with the two- or three-year timeframe that is the norm for her company. Doing so keeps her focused on supporting the company’s overall future strategy but with enough flexibility to adjust along the journey. “That timeframe is a sweet spot that allows us to set a ‘dream big’ strategy with room to be agile, so we can deliver and push the limits of what’s possible,” she says. “The pace of technological change today is faster than it’s ever been, and if IT leaders aren’t looking around the corner now, it’s possible they’ll fall behind and never catch up.” ... “A roadmap is still a useful tool to provide that north star, the objectives and the goals you’re trying to achieve, and some sense of how you’ll get to those goals,” McHugh says. Without that, McHugh says CIOs won’t consistently deliver what’s needed when it’s needed for their organizations, nor will they get IT to an optimal advanced state. “If you don’t have a goal or an outcome, you’re going to go somewhere, we can promise you that, but you’re not going to end up in a specific location,” she adds.


Innovations in Machine Identity Management for the Cloud

Non-human identities are critical components within the digital landscape. They enable machine-to-machine communications, providing an array of automated services that underpin today’s digital operations. However, their growing prevalence means they are also becoming prime targets for cyber threats. Are existing cybersecurity strategies equipped to address this issue? Acting as agile guardians, NHI management platforms offer promising solutions, securing both the identities and their secrets from potential threats and vulnerabilities. By placing equal emphasis on the management of both human and non-human identities, businesses can create a comprehensive cybersecurity strategy that matches the complexity and diversity of today’s digital threats. ... When unsecured, NHIs become hotbeds for cybercriminals who manipulate these identities to procure unauthorized access to sensitive data and systems. For companies regularly transacting in consumer data (like in healthcare or finance), the unauthorized access and sharing of sensitive data can lead to hefty penalties due to non-compliant data management practices. An effective NHI management strategy acts as a pivotal control over cloud security. 


From Crisis to Control: Establishing a Resilient Incident Response Framework for Deployed AI Models

An effective incident response framework for frontier AI companies should be comprehensive and adaptive, allowing quick and decisive responses to emerging threats. Researchers at the Institute for AI Policy and Strategy (IAPS) have proposed a post-deployment response framework, along with a toolkit of specific incident responses. The proposed framework consists of four stages: prepare, monitor and analyze, execute, and recovery and follow up. ... Developers have a variety of actions available to them to contain and mitigate the harms of incidents caused by advanced AI models. These tools offer a variety of response mechanisms that can be executed individually or in combination with one another, allowing developers to tailor specific responses based on the incident's scope and severity. ... Frontier AI companies have recently provided more transparency into their internal safety policies, including Responsible Scaling Policies (RSPs) published by Anthropic, Google DeepMind, and OpenAI. When it comes to responding to post-deployment incidents, however, all three RSPs lack clear, detailed, and actionable plans.


We’re Extremely Focused on Delivering Value Sustainably — NIH CDO

Speaking of challenges in her role as the CDO, Ramirez highlights managing a rapidly growing data portfolio. She stresses the importance of fostering partnerships and ensuring the platform’s accessibility to those aiming to leverage its capabilities. One of the central hurdles has been effectively communicating the portfolio’s offerings and predicting data availability for research purposes. She describes the critical need to align funding and partnerships to support delivery timelines of 12 to 24 months, a task that demanded strong leadership from the coordinating center. This dual role of ensuring readiness and delivery has been both a challenge and a success. Ramirez shares that the team has grown more adept at framing research data as a product of their system, ready to meet the needs of collaborators. She also expresses enthusiasm for working with partners to demonstrate the platform’s benefits and efficiencies in advancing research objectives. Sharing AI literacy and upskilling initiatives in the organization, Ramirez mentions building a strong sense of community among data professionals. She highlights efforts to establish a community of practice that brings together individuals working in their federal coordinating center and awardees who specialize in data science and systems.


5 Questions Your Data Protection Vendor Hopes You Don’t Ask

Data protection vendors often rely on high-level analysis to detect unusual activity in backups or snapshots. This includes threshold analysis, identifying unusual file changes, or detecting changes in compression rates that may suggest ransomware encryption. These methods are essentially guesses prone to false positives. During a ransomware attack, details matter. ... Organizations snapshot or back up data regularly, ranging from hourly to daily intervals. When an attack occurs, restoring a snapshot or backup overwrites production data—some of which may have been corrupted by ransomware—with clean data. If only 20% of the data in the backup has been manipulated by bad actors, recovering the full backup or snapshot will result in overwriting 80% of data that did not need restoration. ... Cybercriminals understand that databases are the backbone of many businesses, making them prime targets for extortion. By corrupting these databases, they can pressure organizations into paying ransoms. ... AI is now a mainstream topic, but understanding how an AI engine is trained is critical to evaluating its effectiveness. When dealing with ransomware, it's important that the AI is trained on real ransomware variants and how they impact data.



Quote for the day:

"The essence of leadership is the capacity to build and develop the self-esteem of the workers." -- Irwin Federman

Daily Tech Digest - December 17, 2024

Together For Good: How Humans And AI Can Close The Health Gap

While the potential is immense, AI’s effectiveness in closing the health gap hinges on more than just technological advancement. AI must be deliberately tailored, trained, tested, and targeted to bring out the best in and for people and the planet. This means anchoring AI development and deployment in a holistic understanding of humans, and the environment they evolve in. It also entails the design of ethical frameworks, transdisciplinary collaboration, and 360-degree strategies that systematically bring out the complementarity of AI and NI, including the knowledge, experience, and intuition of humans. ... Closing the gap of preventable health inequalities cannot be achieved by advanced algorithms alone. It requires us to integrate the strengths of artificial intelligence with natural intelligence — the knowledge, ethical judgment, empathy, and cultural understanding of human beings — to ensure that solutions are both effective and just. By anchoring AI in localized insight and human expertise, we can align personal health improvements (micro) with community-led action (meso), informed national policies (macro), and globally coordinated strategies (meta), delivering equitable outcomes in every arena of the organically evolving kaleidoscope that we are part of.


How to Take a Security-First Approach to AI Implementation

Whether it's a third-party tool or an in-house project, thorough research and a clear plan will go a long way toward reducing risks. When developing guidelines for AI implementation, the first step is to match the business case with available tools, remembering that some models are more suited to specific tasks than others. Practicing a Secure by Design strategy from the ground up can future-proof AI implementation. These principles ensure that security is prioritized throughout the entire lifecycle of an AI product. A Secure by Design methodology implements multiple layers of defense against cyberthreats. During the planning stage, the security team's input is critical for a Secure by Design approach. Vendor trust is also vital. Evaluating vendors for trustworthiness and auditing contracts thoroughly, including regular monitoring of updates to vendor terms and conditions, are imperative. It is essential for data quality to be assessed for metrics like accuracy, relevance, and completeness. ... Keeping security at the forefront from the get-go confers advantages, especially as tools and risks evolve. Safer AI is on the horizon as more users adhere to best practices through regulatory frameworks, international collaborations, and security-first use cases.


Data Governance in DevOps: Ensuring Compliance in the AI Era

Implementing effective CI/CD pipeline governance in the age of AI requires a multifaceted approach. It starts with establishing clear policies outlining compliance requirements, security standards, and ethical guidelines for AI development. These policies should be embedded into the pipeline through automated checks and gates. Leveraging advanced automation tools for continuous compliance checking throughout the pipeline is essential. These tools can scan code for vulnerabilities, check for adherence to coding standards, and even analyze AI models for potential biases or unexpected behaviors. Robust version control and change management processes are also crucial components of pipeline governance. They ensure that every change to the codebase or AI model is tracked, reviewed, and approved before progressing through the pipeline. We can't forget logging and auditing. Comprehensive logging and monitoring of all pipeline activities provide the necessary audit trails for compliance demonstration and post-incident analysis. In the context of AI, this extends to monitoring deployed models for performance drift or unexpected behaviors, ensuring ongoing compliance post-deployment. 
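
One concrete form the post-deployment monitoring described above can take is a scheduled drift check that compares a live model metric against the value recorded at release and alerts when it degrades beyond a tolerance. A minimal sketch, with the baseline, live value, and tolerance as illustrative assumptions:

```python
# Sketch: flag performance drift on a deployed model.
# Assumption: the baseline, live value, and tolerance are illustrative; in
# practice both numbers would come from an evaluation job or metrics store.
def check_drift(baseline: float, live: float, tolerance: float = 0.05) -> bool:
    """Return True (and log an alert) if the live metric dropped past the tolerance."""
    drop = baseline - live
    if drop > tolerance:
        print(f"DRIFT ALERT: metric fell {drop:.3f} below baseline {baseline:.3f}")
        return True
    print(f"OK: live metric {live:.3f} within tolerance of baseline {baseline:.3f}")
    return False

check_drift(baseline=0.91, live=0.84)   # -> DRIFT ALERT
check_drift(baseline=0.91, live=0.89)   # -> OK
```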


Top 10 Cloud Data Center Stories of 2024

If you work in the data center industry, you may use the term on-premise (or on-prem) frequently. But have you ever stopped to wonder how the phrase entered the data center lexicon – or considered why on-premise doesn’t make grammatical sense? In a nutshell, the answer is that it should be on-premises – note the s on the end – because premise and premises are different words. If not, you’ll be enlightened by our coverage of the history of the term on-prem and why it has long irked certain CIOs. ... The more complex your cloud architecture becomes, the harder it is to identify security risks and other misconfigurations. That’s why the ability to automate security assessments is growing increasingly important. But how good are the solutions that cloud providers offer for this purpose? To find out, we took a close look at compliance reporting tools from Azure and GCP. The takeaway was that these solutions can automate much of the work necessary to identify misconfigurations that could trigger compliance violations, but they’re no substitute for human experts. ... What was less often discussed – but equally important – is the role of edge infrastructure in AI. That’s what we focused on in our report about edge AI, meaning AI workloads that run at the network edge instead of in traditional cloud data centers.


Clop Ransomware Takes Responsibility for Cleo Mass Exploits

Whether or not Clop is actually responsible for attacks targeting various types of Cleo's MFT software couldn't be confirmed. Separately, on Dec. 10, British cybersecurity expert Kevin Beaumont reported having evidence that the ransomware group Termite possessed a zero-day exploit for vulnerabilities in the Cleo products. Security experts said both groups may well have been involved, either separately or together. "Although Cl0p posted a message on their website, this is not hard evidence pointing to a single threat group's involvement. Therefore, any discussion of whether Termite or Cl0p are behind this exploit is speculation until proven with other indicators/evidence," said Christiaan Beek, senior director of threat analytics at cybersecurity firm Rapid7. "We have seen Cl0p utilize complex chains similar to this vulnerability in multiple file transfer use cases before, such as MOVEit and Accellion FTA in 2021," Beek added.  ... The latest attacks appear to target in part CVE-2024-50623, an unrestricted file upload vulnerability in the managed file transfer products Cleo Harmony, VLTrader and LexiCom. Exploiting the vulnerability enables attackers to remotely execute code with escalated privileges.


Balancing security and user experience to improve fraud prevention strategies

There may not be one right way of handling the balance of security and user-friendly customer experience. Different institutions and their customers will have different needs, and processes might vary somewhat. But overall, there should be clear, easy-to-follow standards and checkpoints built into whatever financial institutions do. For instance, some banks or credit card companies may allow customers to institute their own stop gap for purchases over a certain amount, which may reduce the incentive for relatively large-scale fraud. These companies could also introduce some level of personalization into the processes, like how a credit or debit card could be easily turned on and turned off by customers themselves via an app or site. ... Meanwhile, it seems like barely a day goes by when there’s not some coverage of fraud or a release of personal info via hacking from some corporation, and some speculate increasingly advanced technology may make it easier for those who want to perpetrate fraud. With this in mind, there may be a greater emphasis placed on enhancing security and experimentation in what different institutions do to find what works best and to have a process in place that allows customers to have confidence in their banks and credit card companies.


Generative AI Is Just the Beginning — Here’s Why Autonomous AI is Next

Embracing this technology will unlock significant opportunities to improve organizational efficiency and accuracy. But before we dive into this, let us start with some definitions. Autonomous AI refers to systems that can perform tasks without human intervention. In contrast, generative AI systems focus on content creation based on existing data. What sets autonomous AI apart is its ability to self-manage. Understanding this difference is crucial, enabling organizations to use AI for more complex operations like predictive maintenance and resource optimization. ... The first step in successfully integrating autonomous AI into your organization is implementing robust data governance frameworks to support these advanced systems. Establish clear data privacy and transparency guidelines to ensure autonomous AI operates within ethical boundaries. It’s crucial to incorporate technical controls that prevent the AI from making reckless decisions, aligning its actions with your organizational values. ... When exploring the future of autonomous AI within your organization, it’s crucial to monitor and evaluate your autonomous AI systems regularly. Continuous assessment allows you to understand how the AI is performing and identify potential improvement areas.


Privacy by design approach drives business success in today’s digital age

Businesses that adhere to data privacy practices validate the upkeep of customer data and data privacy, earning them a stronger brand reputation. They should also ensure privacy is embedded in the organisation’s framework across the technology, products, and services, which is also known as Privacy by Design (PbD). ... The PbD framework was developed by Dr. Ann Cavoukian, Information & Privacy Commissioner of Ontario, jointly with the Dutch Data Protection Authority and the Netherlands Organisation for Applied Scientific Research in 1995. It aimed to cultivate and embed privacy defences to safeguard data in the design process of a product, service, or system. Privacy becomes the default setting built at the very beginning rather than an afterthought. This framework is founded on seven core principles: being proactive and not reactive, having privacy as the default setting, having privacy embedded into design, full functionality, end-to-end security, visibility and transparency, and respect for user privacy. ... The PbD approach which is proactive indicates the company’s commitment to protecting the customer’s sensitive personal information. PbD enables companies to have personalised engagement with customers while respecting their privacy preferences.


Top 10 cybersecurity misconfigurations: Nail the setup to avoid attacks

Despite the industry-wide buzz about things like zero-trust, which is rooted in concepts such as least-privileged access control, this weakness still runs rampant. CISA’s publication calls out excessive account privileges, elevated service accounts, and non-essential use of elevated accounts. Anyone who has worked in IT or cyber for some time knows that many of these issues can be traced back to human behavior and the general demands of working in complex environments. ... Another fundamental security control that makes an appearance is the need to segment networks, a practice again that ties to the broader push for zero trust. By failing to segment networks, organizations are failing to establish security boundaries between different systems, environments, and data types. This allows malicious actors to compromise a single system and move freely across systems without encountering friction and additional security controls and boundaries that could impede their nefarious activities. The publication specifically calls out challenges where there is a lack of segmentation between IT and OT networks, putting OT networks at risk, which have real-world implications around security and safety in environments such as industrial control systems.


Why Indian enterprises are betting big on hybrid multi-cloud strategies?

The multi-cloud strategy in India is deeply intertwined with the country’s broader digital transformation initiatives. The Government of India’s Digital India program and initiatives like the National Cloud Initiatives are providing a robust framework for cloud adoption. ... The importance of edge computing is growing, and the rollout of 5G is opening up new possibilities for distributed cloud architectures. Telecom titans like Jio and Airtel are investing substantially in cloud-native infrastructure, creating ripple effects throughout industries. On the other hand, startup ecosystems play a crucial role too. Bangalore, often called the Silicon Valley of India, has become a hotbed for cloud-native technologies. Companies and numerous cloud consulting firms are developing cutting-edge multi-cloud solutions that are gaining global recognition. Foreign investments are pouring in. Major cloud providers like AWS, Microsoft Azure, and Google Cloud are expanding their infrastructure in India, with dedicated data centers that meet local compliance requirements. This local presence is critical for enterprises concerned about data sovereignty and latency.



Quote for the day:

"You aren’t going to find anybody that’s going to be successful without making a sacrifice and without perseverance." -- Lou Holtz

Daily Tech Digest - December 16, 2024

What IT hiring looks like heading into 2025

AI isn’t replacing jobs so much as it is reshaping the nature of work, said Elizabeth Lascaze, a principal in Deloitte Consulting’s Human Capital practice. She, too, sees evidence that entry-level roles focused on tasks like note-taking or basic data analysis are declining as organizations seek more experienced workers for junior positions. “Today’s emerging roles require workers to quickly leverage data, generate insights, and solve problems,” she said, adding that those skilled in using AI, such as cybersecurity analysts applying AI for threat detection, will be highly sought after. Although the adoption of AI has led to some “growing pains,” many workers are actually excited about it, Lascaze said, with most employees believing it will create new jobs and enhance their careers. “Our survey found that just 24% of early career workers and 14% of tenured workers fear their jobs will be replaced by AI,” Lascaze said. “Tenured workers are more likely to lead organizational strategy, so they may prioritize AI’s potential to improve efficiency, sophistication, and work quality in existing roles rather than AI’s potential to eliminate certain positions.” “These workers reported being slightly more focused on building AI fluency than early-career employees,” Lascaze said.


The Future of AI (And Travel) Relies on Synthetic Data

Synthetic data enhances accuracy and fairness in AI models as organic data can be biased or unbalanced, leading to ML models failing to represent diverse populations accurately. With synthetic data, researchers can create datasets that more accurately reflect the demographics they intend to serve, thereby minimizing biases and improving overall model robustness. ... Synthetic data can be a double-edged sword. While it addresses data privacy and availability challenges, it can inadvertently carry or magnify biases embedded in the original dataset. When source data is flawed, those imperfections can cascade into the synthetic version, skewing results — a critical concern in high-stakes domains like healthcare and finance, where precision and fairness are paramount. To counteract this, having a human in the loop is super important. While there’s a temptation to use synthetic data to fill in every gap for better accuracy and fairness, we understood that running synthetic searches for every flight combination possible globally for our price tracking and predictions feature could overwhelm our booking system and impact real travelers organically searching for flights. Synthetic data has limitations that go beyond bias. 
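
As a toy illustration of the rebalancing idea described above, the sketch below tops up an under-represented group with synthetic rows sampled (with small jitter) around its real records; the dataset and generator are invented, and production pipelines use purpose-built generators plus the human review noted above.

```python
# Sketch: top up an under-represented group with synthetic rows.
# Assumption: the dataset and the noise-based generator are toy illustrations;
# real synthetic-data pipelines use dedicated generators and human review.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
data = pd.DataFrame({
    "group": ["A"] * 90 + ["B"] * 10,
    "fare":  np.concatenate([rng.normal(300, 40, 90), rng.normal(420, 60, 10)]),
})

minority = data[data["group"] == "B"]
needed = (data["group"] == "A").sum() - len(minority)
synthetic = minority.sample(needed, replace=True, random_state=0).copy()
synthetic["fare"] += rng.normal(0, 10, needed)        # small jitter around real rows

balanced = pd.concat([data, synthetic], ignore_index=True)
print(balanced["group"].value_counts())               # A: 90, B: 90
```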


9 Cloud Service Adoption Trends

Most organizations are building modern cloud computing applications to enable greater scalability while reducing cost and consumption costs. They’re also more focused on the security and compliance of cloud systems and how providers are validating and ensuring data protection. “Their main focus is really around cost, but a second focus would be whether providers can meet or exceed their current compliance requirements,” says Will Milewski, SVP of cloud infrastructure and operations at content management solution provider Hyland. ... There’s a fundamental shift in cloud adoption patterns, driven largely by the emergence of AI and ML capabilities. Unlike previous cycles focused primarily on infrastructure migration, organizations are now having to balance traditional cloud ROI metrics with strategic technology bets, particularly around AI services. According to Kyle Campos, chief technology and product officer at cloud management platform provider CloudBolt Software, this evolution is being catalyzed by two major forces: First, cloud providers are aggressively pushing AI capabilities as key differentiators rather than competing on cost or basic services. Second, organizations are realizing that cloud strategy decisions today have more profound implications for future innovation capabilities than ever before.


We’ve come a long way from RPA: How AI agents are revolutionizing automation

As the AI ecosystem evolves, a significant shift is occurring toward vertical AI agents — highly specialized AI systems designed for specific industries or use cases. As Microsoft co-founder Bill Gates said in a recent blog post: “Agents are smarter. They’re proactive — capable of making suggestions before you ask for them. They accomplish tasks across applications. They improve over time because they remember your activities and recognize intent and patterns in your behavior.” Unlike traditional software-as-a-service (SaaS) models, vertical AI agents do more than optimize existing workflows; they reimagine them entirely, bringing new possibilities to life. ... The most profound shift in the automation landscape is the transition from RPA to multi-agent AI systems capable of autonomous decision-making and collaboration. According to a recent Gartner survey, this shift will enable 15% of day-to-day work decisions to be made autonomously by 2028. These agents are evolving from simple tools into true collaborators, transforming enterprise workflows and systems. ... As AI agents progress from handling tasks to managing workflows and entire jobs, they face a compounding accuracy challenge: each additional step introduces potential errors, which multiply and degrade overall performance.
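The compounding effect is easy to quantify with a back-of-the-envelope model (the 95% per-step figure below is an assumption for illustration, not a number from the article): if each step in an agent workflow succeeds independently with probability p, the whole chain succeeds with probability p raised to the number of steps.

# Illustrative only: assumes independent per-step success rates.
def chain_accuracy(per_step_accuracy: float, steps: int) -> float:
    return per_step_accuracy ** steps

for steps in (1, 5, 10, 20):
    print(f"{steps:>2} steps at 95% per step -> {chain_accuracy(0.95, steps):.0%} end-to-end")
# 1 step ~95%, 5 steps ~77%, 10 steps ~60%, 20 steps ~36%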


8 reasons why digital transformations still fail

“People got really excited about ‘We’re going to transform,’” Woerner says, but she believes part of the problem lies with leaders who “didn’t have the discipline to make the hard choices early on” to get employee buy-in. Ranjit Varughse, CIO of automotive paint and equipment firm Wesco Group, agrees. “The first challenge is getting digital transformation buy-in from teams at the outset. People are creatures of habit, making many hesitant to change their existing systems and processes,” he says. “Without a clear change management strategy to get a team aligned, ERP implementations in particular can be slow, stall, or even fail entirely.” ... Digital transformation isn’t a technology problem; it’s about understanding how people actually work, not how we think they should work, Wei says. “At PropertySensor, we scrapped our first version after realizing real estate agents needed mobile-first solutions, not desktop dashboards,” he says. ... “People, process, and technology” is a common phrase technology leaders use when discussing the critical elements of a transformation. “But the real focus should be people, people, people,” says Megan Williams, vice president of global technology strategy and transformation at TransUnion.


How companies can address bias and privacy challenges in AI models

Companies understand that AI adoption is existential to their survival, with the winners of tomorrow being determined by their ability to harness AI effectively. Furthermore, they understand that their brand’s reputation is one of their most valuable assets. Missteps with AI—especially in mission-critical contexts (think of a trading algorithm going AWOL, a breach of user privacy, or a failure to meet safety standards)—can erode public trust and harm a company’s bottom line. This could have dire consequences. With a company’s competitiveness and potentially its very survival at stake, AI governance becomes a business imperative that companies cannot afford to ignore. ... Certainly, we see a lot of activity from the government – both at the state and federal levels – which is creating a fragmented approach. We also see leading companies that understand that adopting AI is crucial to their future and want to move fast. They are not waiting for the regulatory environment to settle and are taking a leadership position in adopting responsible AI principles to safeguard their brand reputations. So, I believe companies will act intelligently out of self-interest to accelerate their AI initiatives and increase business returns.


Ensuring AI Accountability Through Product Liability: The EU Approach and Why American Businesses Should Care

In terms of a substantive law regulating AI (which can be the basis of the causality presumption under the proposed AI Liability Directive), the European Union’s Artificial Intelligence Act (AI Act) entered into force on August 1, 2024, becoming the first comprehensive legal framework for AI globally. The AI Act applies to providers and developers of AI systems that are marketed or used within the EU (including free-to-use AI technology), regardless of whether those providers or developers are established in the EU or in another country. The AI Act sets forth requirements and obligations for developers and deployers of AI systems in accordance with a risk-based classification system and a tiered approach to governance, two of the most innovative features of the Act. It classifies AI applications into four risk categories: unacceptable risk, high risk, limited risk, and minimal or no risk. AI systems deemed to pose an unacceptable risk, such as those that violate fundamental rights, are outright banned. ... High-risk AI systems, which include areas such as health care, law enforcement, and critical infrastructure, will face stricter regulatory scrutiny and must comply with rigorous transparency, data governance, and safety protocols.
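As a rough, non-authoritative way to picture the tiered approach, the four categories can be modelled as a simple lookup from tier to obligation; the example systems and one-line summaries below are simplified assumptions, not text from the Act.

from enum import Enum

class AIActRiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict transparency, data-governance and safety obligations"
    LIMITED = "lighter transparency duties, e.g. disclosing AI interaction"
    MINIMAL = "no additional obligations under the Act"

# Hypothetical mapping of example systems to tiers, for illustration only.
examples = {
    "social-scoring system": AIActRiskTier.UNACCEPTABLE,
    "diagnostic triage assistant": AIActRiskTier.HIGH,
    "customer-service chatbot": AIActRiskTier.LIMITED,
    "spam filter": AIActRiskTier.MINIMAL,
}

for system, tier in examples.items():
    print(f"{system}: {tier.name} -> {tier.value}")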


Agentic AI is evolving into specialised assistants, enabling the workforce to focus on value-adding tasks

A structured discovery approach is required to identify high-impact areas for AI adoption rather than siloed use-cases. Infosys Topaz comprises verticalised blueprints, industry catalogues and strategic AI value map analysis capabilities. We have created playbooks for industries that lay out a structured roadmap to embed and mature GenAI into core processes and operations and across the IT landscape. This includes the right use-cases across the value stream spanning operations, customer experience, research and development, etc. As part of our Responsible AI by Design approach, we implement robust technical and process guardrails to ensure privacy and security. These include impact assessments, audits, automated policy enforcement, monitoring tools, and runtime safeguards to filter inputs and outputs for generative AI. We also use red-teaming and advanced testing tools to identify vulnerabilities and fortify AI models. Additionally, we employ privacy-preserving techniques such as Homomorphic Encryption and Secure Multi-Party Computation to enhance the security and resilience of our AI solutions. ... AI-driven monitoring tools detect inefficiencies in IT infrastructure, leveraging predictive analytics and forecasting techniques to improve utilisation in real time.
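Runtime input/output guardrails of the kind described above can be pictured with a minimal sketch (purely illustrative; the block-list patterns and redaction rule are assumptions, not a description of Infosys Topaz internals): prompts are screened before they reach the model, and responses are screened before they reach the user.

import re

# Hypothetical patterns: a prompt-injection phrase and card-like 16-digit numbers.
BLOCKED_INPUT_PATTERNS = [r"(?i)ignore previous instructions", r"\b\d{16}\b"]

def screen_prompt(prompt: str) -> str:
    """Reject inputs that match any blocked pattern before they reach the model."""
    for pattern in BLOCKED_INPUT_PATTERNS:
        if re.search(pattern, prompt):
            raise ValueError("prompt rejected by input guardrail")
    return prompt

def screen_response(response: str) -> str:
    """Redact sensitive-looking spans from model output before returning it."""
    return re.sub(r"\b\d{16}\b", "[REDACTED]", response)

def guarded_generate(prompt: str, model_call) -> str:
    return screen_response(model_call(screen_prompt(prompt)))

# Stand-in for the real model call, to show the flow end to end.
print(guarded_generate("Summarise this ticket",
                       lambda p: "Card 4111111111111111 was charged twice."))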


Security leaders’ top 10 takeaways for 2024

One of the most significant new rules, which has received the lion’s share of press attention, is the ‘materiality’ component, or the need to report “material” cybersecurity incidents to the SEC within four business days. At issue is whether the incident led to significant risk to the organization and its shareholders. If so, it’s defined as material and must be reported within four business days of this determination being made (not of its initial discovery). “Materiality extends beyond quantitative losses, such as direct financial impacts, to include qualitative aspects, like reputational damage and operational disruptions,” McGladrey says. He adds that the SEC’s materiality guidance underscores the importance of investor protection in relation to cybersecurity events and that, if in doubt, the safest path is reporting. “If a disclosure is uncertain, erring on the side of transparency safeguards shareholders,” he tells CSO. ... As a virtual or fractional CISO service, Sage has observed startups engaging vCISO services earlier, at the pre-seed and Series A stages and, in some cases, before they’ve finalized their minimum viable product. “Small technology consulting and boutique software development groups are looking for ISO 27001 certifications to ensure they can continue serving their larger customers,” she tells CSO.
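Because the four-business-day clock starts at the materiality determination rather than at discovery, the deadline is a straightforward business-day calculation; the sketch below skips weekends only and ignores market holidays, which is a simplifying assumption.

from datetime import date, timedelta

def sec_disclosure_deadline(materiality_determined: date, business_days: int = 4) -> date:
    """Count forward the required business days (weekends skipped; holidays
    ignored in this illustrative sketch)."""
    current, remaining = materiality_determined, business_days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return current

# Example: materiality determined on Thursday 2024-12-19 -> 2024-12-25.
print(sec_disclosure_deadline(date(2024, 12, 19)))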


Emotional intelligence in IT management: Impact, challenges, and cultural differences

While delivering results is the primary goal of any leader, you can’t forget that you’re managing people, not machines. Emotional intelligence helps balance the need for productivity with fairness and empathy. One way to illustrate this balance is through handling difficult conversations about career moves. Managing a team of over 100 support specialists for several years gave me the opportunity to conduct an interesting experiment. Many employees tend to hide the fact that they are exploring job opportunities elsewhere until the last minute. This creates unnecessary tension and can lead to higher turnover. However, if a manager removes the stigma around job interviews and treats them as part of market research, it encourages open communication. ... Emotionally intelligent managers possess the ability to identify the core of a conflict without letting it escalate. Attempting to gather every single piece of information is not always helpful. Instead, managers should focus on resolving conflicts, as often the solution is already within the team. This does not mean conducting surveys or asking for feedback from each person, as delicate situations require a more refined approach. A manager should observe, analyze, and extract the most significant points quickly and intuitively, enabling conflict resolution before it grows into a larger issue.



Quote for the day:

“Things come to those who wait, but only the things left by those who hustle” -- Abraham Lincoln