Daily Tech Digest - October 24, 2024

The power of prime numbers in computing

Another interesting area where primes pop up in coding is hash functions. A hash function’s primary job is to take an input and transform it into a number that stands in for it. That number is a reduction of the overall input, which makes it useful for things like checksums and structures like hashtables. Hashing for a hashtable (the hash function for the object being placed into the collection; i.e., Java’s hashCode) uses a modulo of a constant, and that constant is recommended to be a prime. Using a prime for the constant helps reduce the likelihood of collisions, because a prime modulus distributes the resulting values more evenly: it shares fewer common factors with patterns in the input. For the same reason, a prime for the hashtable’s “bucket count” helps prevent clustered collisions. In essence, using primes for the hashing constant and the bucket count helps ensure a good random distribution of items across buckets by reducing the likelihood of significant arithmetic relationships between the two. ... Now let’s flip things around a bit and look at how coding helps us handle and understand one of the classic problems of math: discovering primeness. An ancient algorithm was described by Eratosthenes, working in the 3rd century BC.
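The sieve Eratosthenes described can be sketched in a few lines of Python. This is a straightforward, unoptimized version for illustration:

```python
def sieve_of_eratosthenes(limit):
    """Return all primes up to and including `limit`."""
    if limit < 2:
        return []
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            # Mark every multiple of n (starting at n*n; smaller
            # multiples were already crossed out) as composite.
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False
    return [n for n, prime in enumerate(is_prime) if prime]

print(sieve_of_eratosthenes(30))  # → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

The key insight of the sieve is that it never divides: it only crosses out multiples, which is why it remains efficient for generating all primes below a bound.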


New research reveals AI adoption on rise, but challenges remain in data governance and ROI realisation

Commenting on the survey, Noshin Kagalwalla, Vice President & Managing Director, SAS India, said: “Indian companies are undoubtedly making progress in AI adoption, but significant work remains. The challenge lies not only in deploying AI but also in a way that it is trustworthy, scalable, and aligned with long-term business objectives. Strategic investments in data governance and AI infrastructure will be crucial to driving sustainable AI performance across industries in India.” “The disparity in target outcomes between AI Leaders and AI Followers demonstrates a lack of clear strategy and roadmap. Where AI Followers are focused on short-term, productivity-based results, AI Leaders have moved beyond these to more complex functional and industry use cases,” said Shukri Dabaghi, Senior Vice President, Asia Pacific and EMEA Emerging at SAS. “As businesses look to capitalise on the transformative potential of AI, it’s important for business leaders to learn from the differences between an AI Leader and an AI Follower. Avoiding a ‘gold rush’ way of thinking ensures long-term transformation is built on trustworthy AI and capabilities in data, processes and skills,” said Mr. Dabaghi.


Dulling the impact of AI-fueled cyber threats with AI

Organizations that wish to curb the burgeoning impact of AI on their cyber risks need to be particularly vigilant while taking advantage of the abilities of AI to stem this tide of attacks. With AI capable of analyzing vast amounts of data, it can detect anomalies across their operations, such as spikes in network traffic, unusual user activities, and even suspicious mail. This approach also reduces the time taken for companies to respond to attacks. Automation, too, can be applied to processes such as cyber threat hunting and vulnerability assessments while rapidly mitigating potential damage in the event of a cyberattack. Moreover, AI can reduce false positives more effectively than rule-based security systems. Contextualizing patterns and identifying potential threats can minimize alert fatigue and optimize the use of resources. Organizations can even take pre-emptive steps to stop future attacks before they happen with AI’s predictive capabilities. AI can also personalize training for employees more vulnerable to social engineering attacks. Then there’s reinforcement learning, a type of machine learning model that trains algorithms to make effective cybersecurity decisions. 
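As a minimal illustration of the anomaly-detection idea described above (a statistical baseline, not the far richer models real AI security systems use), a z-score check can flag spikes in network traffic:

```python
import statistics

def flag_traffic_spikes(samples, threshold=3.0):
    """Return indices of samples more than `threshold` standard
    deviations above the mean -- a toy stand-in for AI-based
    anomaly detection over network traffic."""
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []  # perfectly flat traffic has no outliers
    return [i for i, s in enumerate(samples)
            if (s - mean) / stdev > threshold]

# Requests per minute; the 5000 is a simulated spike.
traffic = [100, 102, 98, 101, 99, 100, 97, 103,
           100, 101, 99, 102, 98, 100, 101, 5000]
print(flag_traffic_spikes(traffic))  # → [15]
```

A real deployment would learn a model of normal behavior per user, per host, and per protocol rather than a single global mean, but the core idea is the same: quantify "normal," then surface deviations.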


6 Essential Components of a Successful Security ‘Rewards Program’ for Software Developers

To effectively gauge developers’ security capabilities, evaluations should extend beyond training and skill assessments to analyze their behavior during code production. With these benchmarks in place, consider the following questions: How many mistakes are developers still making? Are they learning from their mistakes and fixing security bugs? Are they coaching peers to develop code securely? Do they conduct peer pull-request reviews for security flaws? ... We understand that developer teams are under pressure to produce better code faster. As a result, they may view security as a barrier to innovation, leading them to take shortcuts or ignore vulnerabilities entirely. To evaluate the current security culture and the mentorship provided to developers, it is important to assess not only whether they are coaching their peers but also the depth and effectiveness of their guidance and how it impacts their own security practices. By establishing a baseline to verify and measure developers’ secure coding skills, security teams will get a clear sense of how well developers are producing secure code from the beginning.


Angular’s Approach to Partial Hydration

Janiuk noted there was a lot of confusion about what hydration actually means, so she began by defining it. “It is a server-side rendering initial load optimization for web apps,” she told the audience. She then walked through what actually happens during hydration. “We’ve got a little happy web server here, and that web server has your application on it,” she said. “That web server is like, ‘Great, I’m going to render that out,’ but what it actually just does is it generates some DOM nodes.” The DOM nodes end up being just a string that is passed off to the client browser, which renders the HTML, she continued. ... The hydration process essentially causes the browser to load the application. “It’s the meshing together of the DOM that was rendered by your web server and the application waking up and identifying what that DOM is — that’s the process of hydration, remeshing together your application code with the DOM,” she said. Rather than fully hydrating the application immediately, partial hydration allows developers to identify portions of their application — maybe a footer or something that a user will not immediately need to see — and, rather than shipping all of the JavaScript in the app, “hydrate” only the parts that are needed immediately.


Overconfidence in Cybersecurity: A Hidden Risk

Overconfidence in cybersecurity is a serious and often overlooked risk. Too many companies believe that investing in the latest tools and hiring top talent guarantees safety. But it doesn't. Without constantly adapting your strategy, even the best technology won’t protect you. The greatest danger might not come from hackers, but from your own false sense of security. It’s easy to think that spending millions on sophisticated tools will keep threats at bay. The more rigid your approach, the more exposed you become. Cyber threats evolve constantly -- if you don’t keep up, you’re inviting risk. ... As threats grow to be more sophisticated, companies are doubling down on technology to defend themselves. The more you rely on tools without oversight, the more exposed you become. Don’t assume you’re safe just because you’ve invested heavily in security. By streamlining, auditing, and focusing on the human element, you can avoid the pitfalls of overconfidence. In cybersecurity, confidence should come from having the right processes and people -- not just the latest tools. By following these steps and learning from cases like Uber, you’ll strengthen your defenses and avoid the dangers of overconfidence. It’s not about having more tech -- it’s about using it effectively.


4 Key Reasons to Build a Data Culture

Building a data culture within an organization fosters numerous benefits that can significantly enhance organizational development. A data-driven environment encourages informed decision-making by leveraging accurate and timely information. This leads to more strategic planning and problem-solving, as decisions are based on empirical evidence rather than intuition or anecdotal experiences. Consequently, this reduces risks and increases the likelihood of successful outcomes. ... By leveraging data analytics, companies can extract valuable insights from vast amounts of raw data, enabling them to make informed decisions that drive growth and efficiency. Business intelligence (BI) goes a step further by transforming these insights into actionable strategies that align with the company’s objectives. ... Leveraging a robust data culture for strategic planning and performance improvement is pivotal in today’s competitive landscape. By fostering a culture where data is integral to decision-making processes, businesses can systematically analyze trends, forecast outcomes, and identify potential challenges before they escalate. 


Exploring the Transformative Potential of AI in Cybersecurity

AI-powered systems can monitor network traffic in real-time, automatically identifying and prioritizing potential threats. These systems can correlate data from multiple sources, providing a holistic view of the security landscape and enabling faster, more informed decision-making. AI can automate the process of threat intelligence gathering and analysis. By continuously scanning the dark web, hacker forums and other sources, AI systems can provide up-to-date intelligence on emerging threats, attack techniques, and vulnerabilities. This real-time intelligence allows security teams to proactively update defenses and patch vulnerabilities before they can be exploited. Perhaps the most exciting potential of AI in cybersecurity lies in its predictive capabilities. By analyzing historical data and current trends, AI systems can forecast potential future attacks and vulnerabilities. ... While the potential of AI in cybersecurity is immense, it’s not without challenges. AI systems are only as good as the data they’re trained on, and ensuring the quality and diversity of training data is crucial. There’s also the risk of adversarial AI, where attackers use AI to evade detection or launch more sophisticated attacks.


Connected Vehicles and Data Privacy & Sovereignty in the Global South

In addition to data privacy, the rise of connected vehicles raises concerns about data sovereignty. Data sovereignty refers to the handling and control of data in line with a country's legal frameworks, practices, cultural norms, and laws, including those related to data protection, competition, and national security. It may involve ensuring that countries retain “control” over their residents’ and government data; consequently, relevant policies may include conditions on data transfers and restrictions on reliance on foreign technology that could lead to data being stored overseas. The presence of foreign-built connected vehicles roaming a country’s streets raises digital sovereignty concerns. Many experts and scholars push back on equating digital sovereignty with other threats to a nation’s sovereignty. For example, Chander and Sun argue that European concerns regarding the dominance of large platforms are “misplaced.” “It is like arguing that because people drive Toyota cars on U.S. roads, we no longer control our streets. As long as the cars are regulated by local law, the fact that they might be built abroad should not undermine sovereignty,” they contend. However, with connected vehicles now widespread, has this dynamic shifted?


What Are Hierarchical Security Practices in DevOps?

Adopting hierarchical security practices in DevOps brings several benefits. By integrating security checks at every stage, organizations can ensure a smoother release process and enhance reliability. This approach also encourages collaboration by making security a shared responsibility across development, testing, and operations teams, breaking down silos and fostering a culture of security mindfulness. However, there are challenges to consider. Implementing security measures across all levels demands careful coordination, especially for larger or distributed teams. The initial phase of adopting these practices may slow development as teams adjust to new tools and protocols. Moreover, hierarchical security is resource-intensive, requiring time, training, and investment in appropriate tools. Beyond the technical aspects, there is also a cultural shift required — team members must embrace security as an integral part of their roles, which can sometimes meet resistance. Organizations need to balance these benefits and challenges carefully, tailoring their hierarchical security approach to fit their specific needs, goals, and resources. 



Quote for the day:

"The secret of getting things done is to act!" -- Dante Alighieri

Daily Tech Digest - October 23, 2024

What Is Quantum Networking, and What Might It Mean for Data Centers?

Conventional networks break data into packets and move them across wires or radio waves using long-established networking protocols, such as TCP/IP. In contrast, quantum networks move data using photons or electrons, leveraging unique aspects of quantum physics to enable powerful new features like entanglement, which effectively makes it possible to verify the source of data based on the quantum state of the data itself. ... Because quantum networking remains a theoretical and experimental domain, it's challenging to say at present exactly how quantum networks might change data centers. What does seem clear, however, is that data center operators seeking to offer full support for quantum devices will need to implement fundamentally new types of network infrastructure. They'll need to deploy infrastructure resources like quantum repeaters, while also ensuring that they can support whichever networking standards might emerge in the quantum space. The good news for the fledgling quantum data center ecosystem is that true quantum networks aren't a prerequisite for connecting quantum computers. It's possible for quantum machines themselves to send and receive data over classical networks by using traditional computers and networking devices as intermediaries.


Unmasking Big Tech’s AI Policy Playbook: A Warning to Global South Policymakers

Rather than a genuine, inclusive discussion about how governments should approach AI governance, what we are witnessing instead is a clash of seemingly competing narratives swirling together to obfuscate the real aspirations of big tech. The advocates of open-source large language models (LLMs) present themselves as civic-minded, democratic, and responsible, while closed-source proponents position themselves as the responsible stewards of secure, walled-garden AI development. Both sides dress their arguments with warnings about dire consequences if their views aren’t adopted by policymakers. ... For years, tech giants have employed scare tactics to convince policymakers that any regulation will stifle innovation, lead to economic decline, and exclude countries from the prestigious digital vanguard. These dire warnings are frequently targeted, especially in the Global South, where policymakers often lack the resources and expertise to keep pace with rapid technological advancements, including AI. Big tech’s polished lobbyists offer what seems like a reasonable solution: “workable regulation,” which translates to delayed, light-touch, or self-regulation of emerging technologies.


AI Agents: A Comprehensive Introduction for Developers

The best way to think about an AI agent is as a digital twin of an employee with a clear role. When any individual takes up a new job, there is a well-defined contract that establishes the essential elements — such as job definition, success metrics, reporting hierarchy, access to organizational information, and whether the role includes managing other people. These aspects ensure that the employee is most effective in their job and contributes to the overall success of an organization. ... The persona of an AI agent is the most crucial aspect that establishes the key trait of an agent. It is the equivalent of a title or a job function in the traditional environment. For example, a customer support engineer skilled in handling complaints from customers is a job function. It is also the persona of an individual who performs this job. You can easily extend this to an AI agent. ... A task is an extension of the instruction that focuses on a specific, actionable item within the broader scope of the agent’s responsibilities. While the instruction provides a general framework covering multiple potential actions, a task is a direct, concrete action that the agent must take in response to a particular user input.
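The persona/instruction/task breakdown described above can be sketched with simple data structures. The class and field names below are illustrative, not from any particular agent framework, and the example agent is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """An AI agent modeled like an employee contract: a persona (the
    job function), an instruction (the general framework of
    responsibilities), and concrete tasks assigned within that scope."""
    persona: str                         # e.g., a job title or function
    instruction: str                     # broad scope covering many actions
    tasks: list = field(default_factory=list)

    def assign(self, task: str):
        """A task is a direct, concrete action within the instruction's scope."""
        self.tasks.append(task)

support_agent = Agent(
    persona="Customer support engineer skilled in handling complaints",
    instruction="Resolve customer complaints politely; escalate when needed",
)
support_agent.assign("Respond to a refund request for an illustrative order")
```

Separating the stable parts (persona, instruction) from the per-request part (task) mirrors the article's point: the persona rarely changes, while tasks arrive with each user input.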


AI in compliance: Streamlining HR processes to meet regulatory standards

With the increasing focus on data protection laws like the General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and India’s Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 under the Information Technology Act, 2000, maintaining the privacy and security of employee data has become paramount. The Indian IT Privacy Law mandates that companies ensure the protection of sensitive personal data, including employee information, and imposes strict guidelines on how data must be collected, processed, and stored. AI can assist HR teams by automating data management processes and ensuring that sensitive information is stored securely and only accessed by authorized personnel. AI-driven tools can also help monitor compliance with data privacy regulations by tracking how employee data is collected, processed, and shared within the organization. ... This proactive monitoring reduces the likelihood of non-compliance and minimizes risks associated with data breaches, helping organizations align with both international and domestic privacy laws like the Indian IT Privacy Law.


Are humans reading your AI conversations?

Tools like OpenAI’s ChatGPT and Google’s Gemini are being used for all sorts of purposes. In the workplace, people use them to analyze data and speed up business tasks. At home, people use them as conversation partners, discussing the details of their lives — at least, that’s what many AI companies hope. After all, that’s what Microsoft’s new Copilot experience is all about — just vibing and having a chat about your day. But people might share data that’d be better kept private. Businesses everywhere are grappling with data security amid the rise of AI chatbots, with many banning their employees from using ChatGPT at work. They might have specific AI tools they require employees to use. Clearly, they realize that any data fed to a chatbot gets sent to that AI company’s servers. Even if it isn’t used to train genAI models in the future, the very act of uploading data could be a violation of privacy laws such as HIPAA in the US. ... Companies that need to safeguard business data and follow the relevant laws should carefully consider the genAI tools and plans they use. It’s not a good idea to have employees using a mishmash of tools with uncertain data protection agreements or to do anything business-related through a personal ChatGPT account.


CIOs recalibrate multicloud strategies as challenges remain

Like many enterprises, Ally Financial has embraced a primary public cloud provider, adding in other public clouds for smaller, more specialized workloads. It also runs private clouds from HPE and Dell for sensitive applications, such as generative AI and data workloads requiring the highest security levels. “The private cloud option provides us with full control over our infrastructure, allowing us to balance risks, costs, and execution flexibility for specific types of workloads,” says Sathish Muthukrishnan, Ally’s chief information, data, and digital officer. “On the other hand, the public cloud offers rapid access to evolving technologies and the ability to scale quickly, while minimizing our support efforts.” Yet, he acknowledges a multicloud strategy comes with challenges and complexities — such as moving gen AI workloads between public clouds or exchanging data from a private cloud to a public cloud — that require considerable investments and planning. “Aiming to make workloads portable between cloud service providers significantly limits the ability to leverage cloud-native features, which are perhaps the greatest advantage of public clouds,” Muthukrishnan says.


DevOps and Cloud Integration: Best Practices

CI/CD practices are crucial for DevOps implementation with cloud services. Continuous integration regularly merges code changes into a shared repository, where automated tests are run to spot issues early. On the other hand, continuous deployment improves this practice by automatically deploying changes (once they pass tests) to production. The CI/CD approach can accelerate the release cycle and enhance the overall quality of the software. ... Infrastructure as Code (IaC) empowers teams to oversee and provision infrastructure via code rather than manual processes. This DevOps methodology guarantees uniformity across environments and facilitates infrastructure scalability in cloud-based settings. It represents a pivotal element in transforming any enterprise's DevOps strategy. ... According to DevOps experts, security needs to be a part of every step in the DevOps process, called DevSecOps. This means adding security checks to the CI/CD pipeline, using security tools for the cloud, and always checking for security issues. DevOps professionals usually stress how important it is to tackle security problems early in the development process, called "shifting left."
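The CI/CD flow described above — merge to a shared trunk, run automated tests on the merged result, deploy only on green — can be modeled in a few lines. Every name here is illustrative, not a real CI system's API:

```python
def run_pipeline(trunk, change, tests):
    """Continuous integration in miniature: merge `change` into `trunk`,
    run every automated test against the merged result, and report
    whether continuous deployment may proceed."""
    merged = trunk + [change]                        # merge into shared repo
    all_green = all(test(merged) for test in tests)  # automated test gate
    return merged, all_green                         # deploy only if all_green

# Usage: one toy "test" that rejects any change whose name contains "broken".
no_broken = lambda code: all("broken" not in c for c in code)
_, deploy_ok = run_pipeline(["feature-a"], "feature-b", [no_broken])
print(deploy_ok)  # → True
```

The point of the model is the ordering: tests run against the merged state, not the change in isolation, which is what lets CI catch integration issues early.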


Data Resilience & Protection In The Ransomware Age

Backups are considered the primary way to recover from a breach, but are they enough to ensure that the organisation will be up and running with minimal impact? Testing is the critical component: it provides valuable insight into the steps a company will need to take to recover from a variety of scenarios, what works, which areas need attention in the recovery process, and how long it will take to recover the files. Unfortunately, many organisations implement measures to recover but fail on the last step of their resilience approach, namely testing. Without it, they cannot know whether their recovery strategy is effective, what processes to follow to restore data following a breach, or the timeline to recovery. Equally, they will not know whether they backed up their data correctly before an attack if they have not performed adequate testing. Although many IT teams are stretched and struggle to find the time to do regular testing, it is possible to automate the testing process to ensure that it occurs frequently.
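The automated testing the article recommends can start very small. This hedged sketch verifies that a restored copy of a backup matches the original by checksum; the file names are illustrative, and `shutil.copy` stands in for whatever real backup and restore tooling an organisation uses:

```python
import hashlib
import os
import shutil
import tempfile

def checksum(path):
    """SHA-256 digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def restore_test(original, backup, restore_dir):
    """Simulate a restore from `backup` and verify it matches `original`."""
    restored = os.path.join(restore_dir, "restored.dat")
    shutil.copy(backup, restored)  # stand-in for the real restore step
    return checksum(restored) == checksum(original)

# Usage: create a file, "back it up", then prove the restore is intact.
with tempfile.TemporaryDirectory() as d:
    original = os.path.join(d, "data.dat")
    backup = os.path.join(d, "data.bak")
    with open(original, "wb") as f:
        f.write(b"critical business records")
    shutil.copy(original, backup)  # stand-in for the real backup step
    print(restore_test(original, backup, d))  # → True
```

Scheduling a script like this (and timing it) also yields the recovery-time data the article says most organisations lack.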


Is data gravity no longer centered in the cloud?

The need for data governance and security is escalating as AI becomes more prevalent. Organizations are increasingly aware of the risks associated with cloud environments, especially regarding regulatory compliance. Maintaining sensitive data on premises allows for tighter controls and adherence to industry standards, which are often critical in AI applications dealing with personal or confidential information. The convergence of these factors signals a broader reevaluation of cloud-first strategies, leading to hybrid models that balance the benefits of cloud computing with the reliability of traditional infrastructures. This hybrid approach facilitates a tailored fit for various workloads, optimizing performance while ensuring compliance and security. ... Data can exist on any platform, and accessibility should not be problematic regardless of whether data resides on public clouds or on premises. Indeed, the data location should be transparent. Storing data on-prem or with public cloud providers affects how much an enterprise spends and the data’s accessibility for major strategic applications, including AI. Currently, on-prem is the most cost-effective AI platform—for most data sets and most solutions. 


Choosing Between Cloud and On-Prem MLOps: What's Best for Your Needs?

The big benefit of cloud MLOps is the availability of virtually unlimited quantities of CPU, memory, and storage resources. Unlike on-prem environments, where resource capacity is limited by the amount of servers available and the resources each one provides, you can always acquire more infrastructure in the cloud. This makes cloud MLOps especially beneficial for ML use cases where resource needs vary widely or are unpredictable. ... On-prem MLOps may also offer better performance. On-prem environments don't require you to share hardware with other customers (which the cloud usually does), so you don't have to worry about "noisy neighbors" slowing down your MLOps pipeline. The ability to move data across fast local network connections can also boost on-prem MLOps performance, as can running workloads directly on bare metal, without a hypervisor layer reducing the amount of resources available to your workloads. ... You could also go on, under a hybrid MLOps approach, to deploy your model either on-prem or in the cloud depending on factors like how many resources inference will require. 
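The hybrid placement decision described above can be reduced to a toy rule of thumb. The function name, parameters, and thresholds here are illustrative, not a recommendation:

```python
def choose_deployment(required_gpus, on_prem_gpu_capacity, demand_predictable):
    """Route an inference workload: on-prem when local capacity suffices
    and demand is steady; cloud when needs exceed local hardware or
    vary unpredictably (where cloud elasticity pays off)."""
    if required_gpus <= on_prem_gpu_capacity and demand_predictable:
        return "on-prem"
    return "cloud"

print(choose_deployment(2, 8, True))    # → on-prem
print(choose_deployment(16, 8, True))   # → cloud (exceeds local capacity)
print(choose_deployment(2, 8, False))   # → cloud (bursty demand)
```

A real placement policy would also weigh data gravity, egress costs, and compliance constraints, but the shape of the decision — fixed capacity versus elastic demand — is the one the article describes.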



Quote for the day:

"You'll never get ahead of anyone as long as you try to get even with him." -- Lou Holtz

Daily Tech Digest - October 22, 2024

GenAI surges in law firms: Will it spell the end of the billable hour?

All areas of law will use genAI, according to Joshua Lenon, Clio’s Lawyer in Residence. That’s because AI content generation and task automation tools can help both the business side and the practice efforts of law firms. However, areas that have repetitive workflows and large document volumes – like civil litigation – will adopt genAI e-discovery tools more quickly. Practice areas that charge exclusively flat fees – like traffic offenses and immigration – are already the largest adopters of genAI. ... Nearly three-quarters of a law firm’s hourly billable tasks are exposed to AI automation, with 81% of legal secretaries’ and administrative assistants’ tasks being automatable, compared to 57% of lawyers’ tasks, according to a survey by Clio of legal professionals (1,028) and other adults (1,003) in the U.S. general population. Hourly billing has long been the preference of many professionals, from lawyers to consultants, but AI adoption is upending this model where clients are charged for the time spent on services. ... People have been talking about the demise of the billable hour for about 30 years “and nothing’s killed it yet,” said Ryan O’Leary, research director for privacy and legal technology at IDC. “But if anything will, it’ll be this.”


IT security and government services: Balancing transparency and security

For cyber defenses, government IT leaders should invest in website hosting services with Secure Sockets Layer (SSL) encryption, and further enhancing security with HTTP Strict Transport Security (HSTS). These measures ensure that all data exchanged via government sites is encrypted, protecting resident self-service features such as online voter registration, permit submissions, utility bill payments, and more. By enforcing HSTS, websites are also protected from protocol downgrade attacks and cookie hijacking, ensuring that all connections remain secure, and reducing the risk of data interception. Other marks of a reliable website hosting solution provider include DDoS mitigation coverage and reliability around regular software patching and updates. For all digital partners, it’s essential to consider third-party risk. Some of the most valuable information residents should be able to access – meeting minutes, agendas, and other documents pertaining to local governing decisions – are hosted by document management vendors. To ensure this access is secure, each vendor must be vetted on its security capabilities, so that critical data is always protected, and hackers are not able to prevent access for residents or laterally move further into government networks.
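As a minimal illustration of enforcing HSTS, a stdlib-only WSGI app can attach the Strict-Transport-Security header to every response. The max-age value is illustrative, and a real deployment would typically set this at the web server or CDN layer rather than in application code:

```python
# Assumes the app is actually served over TLS: HSTS only instructs
# browsers to refuse future plain-HTTP connections to this host.
HSTS = ("Strict-Transport-Security", "max-age=31536000; includeSubDomains")

def app(environ, start_response):
    """Minimal WSGI app that adds the HSTS header to every response."""
    headers = [("Content-Type", "text/plain"), HSTS]
    start_response("200 OK", headers)
    return [b"Resident self-service portal\n"]
```

For local experimentation this can be served with the standard library's `wsgiref.simple_server`; once a browser sees the header over a valid HTTPS connection, it will refuse downgraded connections for the duration of max-age, which is exactly the protocol-downgrade protection described above.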


Software buying trends are changing: From SaaS to outcome as a service

The last decade saw the rise of Software-as-a-Service (SaaS), transforming how businesses approached software deployment. This decade belongs to Outcomes-as-a-Service. CIOs are no longer interested in building large internal developer teams or experimenting with different platforms. They seek business-impacting solutions with tangible outcomes that drive business success. Business teams need solutions that deliver results today, not tomorrow. ... AI-powered hyperautomation combines generative AI, BPM, RPA, integrations, analytics, and app-building to drive end-to-end outcomes. In today’s dynamic business environment, an integrated approach is essential. Siloed automation with narrowly focused platforms is no longer sufficient. ... AI platforms excel in delivering outcomes at speed and scale. Leveraging automation expertise, they ensure outcomes linked to growth, efficiency, and compliance. The platform implements continuous cycles of process mining, implementation, adoption, and solution refinement until desired objectives are met. They also offer a comprehensive solution, managing everything from process definition and refinement to platform implementation, support, application development, and adoption.


How Retailers Are Using Tech for Competitive Advantage

“While technology can streamline operations, an overreliance on automation without human touch can sometimes backfire,” Peters says. “Consumers still value human interaction, especially in complex support scenarios. It’s crucial for retailers to balance automation with human agents, particularly in areas that require empathy and nuanced decision-making.” ... Companies of all sizes benefit from greater organizational efficiency, and tech has been the fuel powering digital transformation. For example, Lowes uses AR for home improvement shopping while Sephora uses it for virtual makeup try-ons. Walmart is stepping up automation in its battle against Amazon. But smaller retailers are benefiting, too. ... “One of our customer’s last large-scale automations took them five years from the time they started the concept to deployment,” Naslund says. “For context, the pandemic was four and a half years, and the amount of volatility that the supply chain saw over those four years was insane. We saw inventory gluts, inventory shortages, and panic buying. Then you saw a warehouse capacity shortage, everybody panicking to get warehouses. Then, they suddenly have too much space.”


Why and How IT Leaders Can Embrace the AI Revolution

AI software certainly has some consequences for IT departments. There may be some new types of workflows to manage, new user requests to support, and new application deployments to track. But unless your business is actually building complex AI solutions from scratch — which it probably isn't or shouldn't because sophisticated, mature AI tools and services are available from external vendors, complete with support plans and SLAs — implementing AI is not actually that challenging. That's because most third-party AI solutions boil down to SaaS apps that work just like any other SaaS: The vendor builds, manages, and supports them, with few resources and little effort necessary on the part of customers' IT departments. From the perspective of IT, implementing AI isn't all that different from implementing any other type of software. ... For IT, there are really not any novel data privacy or security risks at stake here. The app ingests financial data, but so do plenty of non-AI applications. IT's responsibility when it comes to managing data security for this type of app boils down to vetting the vendor by reviewing its data management and compliance practices. The fact that the app uses AI doesn't change this process.


Has the time come for integrated network and security platforms?

Interest in platformization is growing among enterprises, asserts Extreme Networks, which recently surveyed 200 CIOs and senior IT leaders for its research, CIO Insights Report: Priorities and Investment Plans in the Era of Platformization. ... A platform that helps organizations transition their network to the cloud to streamline IT efficiency and lower total cost of ownership is important, respondents said. In addition, 55% of respondents emphasized the need to integrate offerings from a broad ecosystem of networking and security vendors, indicating a clear demand for unified platforms, Extreme concluded. ... “The message I got from the survey was that customers are operating in a world where there’s a massive proliferation of products or applications, and that’s really translating into complexity. Complexity is equal to risk, and that complexity is happening in multiple places,” said Extreme Networks CTO Nabil Bukhari. Complexity is an interesting topic because it changes, Bukhari said. The first Ford cars were basically just an engine with brakes, but they were complicated to start and drive. “Now, if you look at a car, they are like data centers on wheels. But driving and owning them is exponentially easier,” Bukhari said.


How legacy IT systems can hold your business back

While legacy IT systems may still be functional, they can hold a business back from reaching its full potential – especially if market competitors are busy upgrading their own systems. Companies need to carefully evaluate the costs and benefits of keeping legacy systems in place and develop a plan to modernize their IT infrastructure. Investing in a modern data center solution can, over time, improve business agility, security, and your organization’s bottom line. ... This is especially true when it comes to next-generation applications using LLMs and machine learning (ML) for AI-dependent applications. Enterprise servers, storage and networking hardware, and software manufactured before about 2016 were not designed with scaled-up data workloads in mind – especially workloads for genAI, which just started to take off in 2021. This can hinder growth and force companies to invest in additional hardware or software just to maintain their current operations. Legacy systems are also more prone to failures and outages due to aging hardware and software. This downtime disrupts operations and leads to lost revenue, especially for critical business functions. Additionally, data loss from system crashes can be costly to recover from.


Architecture Inversion: Scale by Moving Computation, Not Data

Now why should the rest of us care, blessed as we are with a lack of most of the billions of users TikTok, Google and the like are burdened with? A number of factors are becoming relevant: ML algorithms are improving, and so is local compute capacity, meaning fully scoring items gives a larger boost in quality and ultimately profit than used to be the case. With the advent of vector embeddings, the signals consumed by such algorithms have grown by one to two orders of magnitude, making the network bottleneck more severe. Applying ever more data to solve problems is increasingly cost effective, which means more data needs to be rescored to maintain a constant quality loss. And as the consumers of data from such systems move from being mostly humans to mostly LLMs in RAG solutions, it becomes beneficial to deliver larger amounts of scored data faster in more applications than before. ... For these reasons, the scaling tricks of the very biggest players are becoming increasingly relevant for the rest of us, which has led to the current proliferation of architecture inversion: moving from traditional two-tier systems, where data is looked up from a search engine or database and sent to a stateless compute tier, to inserting that compute into the data tier itself.
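The inversion the excerpt describes can be sketched in a few lines. The toy below is illustrative only (the `DataNode` class and its methods are invented for this example, not taken from any product): instead of shipping every raw embedding to a stateless scoring tier, the scoring runs on the node that holds the data, and only the top-k results cross the network.

```python
import heapq
import numpy as np

class DataNode:
    """Holds one shard of item embeddings; a stand-in for a search/database node."""

    def __init__(self, embeddings):
        self.embeddings = embeddings  # item id -> embedding vector

    def fetch_all(self):
        # Two-tier style: ship every raw vector to a remote compute tier.
        # Network cost grows with corpus size * embedding dimension.
        return self.embeddings

    def score_top_k(self, query, k):
        # Inverted style: score where the data lives; only k (score, id)
        # pairs ever leave the node.
        scored = ((float(vec @ query), item) for item, vec in self.embeddings.items())
        return heapq.nlargest(k, scored)

rng = np.random.default_rng(0)
node = DataNode({f"item{i}": rng.standard_normal(64) for i in range(1000)})
query = rng.standard_normal(64)

top = node.score_top_k(query, k=10)  # 10 small tuples cross the network, not 1000 vectors
```

With 1,000 items of dimension 64, the two-tier path moves 64,000 floats per query; the inverted path moves 10 scored ids, and the gap widens with corpus size and embedding dimension.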


The secret to successful digital initiatives is pretty simple, according to Gartner

As with all technologies, seeing results from AI comes down to focusing like a laser beam on the problem at hand: "In my experience, the businesses that start with a real use case and problem are seeing an ROI," Julian LaNeve, chief technology officer at Astronomer, a data platform company, told ZDNET. "They define a well-scoped, impactful problem and use gen AI to solve [it], and it's easy to measure success and ROI. The most successful business cases identify how to solve a problem that the business already cares deeply about and [will] deliver additional value to customers." Technology maturity also makes a difference in success rates. "Previous generations of AI were narrower in scope but have been successful," said Dominic Sartorio, vice president at Denodo, a data management provider. "AI is helping with predictive maintenance of manufactured goods, predicting demand spikes in [the] markets, and finding the optimal routes for logistics, and [has] been successful for many years." Furthermore, according to Gartner, companies that treat their digital initiatives in a collaborative fashion -- between business and IT leaders -- rather than leaving all things digital up to their IT departments are successful with technology. 


Showing AI users diversity in training data can boost perceived fairness and trust

The work investigated whether displaying racial diversity cues—the visual signals on AI interfaces that communicate the racial composition of the training data and the backgrounds of the typically crowd-sourced workers who labeled it—can enhance users' expectations of algorithmic fairness and trust. Their findings were recently published in the journal Human-Computer Interaction. AI training data is often systematically biased in terms of race, gender and other characteristics, according to S. Shyam Sundar, Evan Pugh University Professor and director of the Center for Socially Responsible Artificial Intelligence at Penn State. "Users may not realize that they could be perpetuating biased human decision-making by using certain AI systems," he said. Lead author Cheng "Chris" Chen, assistant professor of communication design at Elon University, who earned her doctorate in mass communications from Penn State, explained that users are often unable to evaluate biases embedded in AI systems because they don't have information about the training data or the trainers. "This bias presents itself after the user has completed their task, meaning the harm has already been inflicted, so users don't have enough information to decide if they trust the AI before they use it," Chen said.



Quote for the day:

"It takes courage and maturity to know the difference between a hoping and a wishing." -- Rashida Jourdain

Daily Tech Digest - October 21, 2024

Choosing the Right Tech Stack: The Key to Successful App Development

Choosing the right tech stack is critical because the tech stack you opt to use will shape virtually every aspect of your development project. It determines which programming language you can use, as well as which modules, libraries, and other pre-built components you can take advantage of to speed development. It has implications for security, since some tech stacks are easier to secure than others. It influences the application performance and operating cost because it plays an important role in determining how many resources the application will consume. And so on. ... Building a secure application is important in any context. But if you face special compliance requirements — for example, if you're building a finance or healthcare app, which are subject to special compliance mandates in many places — you may need to guarantee an extra level of security. To that end, make sure the tech stack you choose offers whichever level of security controls you need to meet your compliance requirements. A tech stack alone won't guarantee that your app is compliant, but choosing the right tech stack makes it easier for you to build a compliant app.


What is hybrid AI?

Rather than relying on a single method, hybrid AI integrates various systems, such as rule-based symbolic reasoning, machine learning and deep learning, to create systems that can reason, learn, and adapt more effectively than AI systems that have not been integrated with others. ... Symbolic AI, which is often referred to as rule-based AI, focuses on using logic and explicit rules to solve problems. It excels in reasoning, structured data processing and interpretability but struggles with handling unstructured data or large-scale problems. Machine learning (ML), on the other hand, is data-driven and excels at pattern recognition and prediction. It works well when paired with large datasets, identifying trends without needing explicit rules. However, ML models are often difficult to interpret and may struggle with tasks requiring logical reasoning. Hybrid AI that combines symbolic AI with machine learning makes the most of the reasoning power of symbolic systems as well as the adaptability of machine learning. For instance, a system could use symbolic AI to follow medical guidelines for diagnosing a patient, while machine learning analyses patient records and test results to offer individual recommendations.
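The medical example can be made concrete with a deliberately simplified sketch. Everything here is hypothetical: the rule thresholds, feature names, and weights are invented, and the "learned" layer is a hand-set logistic score standing in for a real trained model. The point is the hybrid structure itself: explicit symbolic rules fire first, and the statistical layer only decides when no rule applies.

```python
import math

def rule_layer(patient):
    """Symbolic step: explicit, interpretable guideline rules (illustrative thresholds)."""
    if patient["systolic_bp"] >= 180:
        return "urgent-referral"   # hard rule from a (hypothetical) clinical guideline
    if patient["age"] < 18:
        return "pediatric-pathway"
    return None                    # no rule fired; defer to the learned model

def ml_layer(patient, weights, bias):
    """Statistical step: a logistic score standing in for a trained ML model."""
    z = bias + sum(w * patient[feature] for feature, w in weights.items())
    return 1.0 / (1.0 + math.exp(-z))  # probability-like risk estimate

def triage(patient, weights, bias, threshold=0.5):
    decision = rule_layer(patient)     # symbolic rules take precedence...
    if decision is not None:
        return decision
    risk = ml_layer(patient, weights, bias)  # ...the model handles everything else
    return "follow-up" if risk >= threshold else "routine"

weights = {"systolic_bp": 0.02, "age": 0.03}  # illustrative, not trained
routine_case = triage({"systolic_bp": 130, "age": 45}, weights, bias=-5.0)
urgent_case = triage({"systolic_bp": 190, "age": 45}, weights, bias=-5.0)
```

The design choice mirrors the article's claim: the rule layer stays auditable against written guidelines, while the model layer adapts to patterns in the data that no one wrote rules for.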


6 Roadblocks to IT innovation

Innovation doesn’t happen by happenstance, says Sean McCormack, a seasoned tech exec who has led innovation efforts at multiple companies. True, someone might have an idea that seemingly comes out of the blue, but that person needs a runway to turn that inspiration into innovation that takes flight. That runway is missing in a lot of organizations. “Oftentimes there’s no formal process or approach,” McCormack says. Consequently, inspired workers must try to muscle through their bright ideas as best they can; they often fail due to the lack of support and structure that would bring the money, sponsors, and skills needed to build and test it. “You have to be purposeful with how you approach innovation,” says McCormack, now CIO at First Student, North America’s largest provider of student transportation. ... Taking a purposeful approach enables innovation in several ways, McCormack explains. First, it prioritizes promising ideas and funnels resources to those ideas, not weaker proposals. It also ensures promising ideas get attention rather than be put on a back burner while everyone deals with day-to-day tasks. And it prevents turf wars between groups, so, for example, a business unit won’t run away with an innovation that IT proposed.


Cyber Criminals Hate Cybersecurity Awareness Month

In the world of enterprises, the expectations for restoring data and backing up data at multi-petabyte scale have changed. IT teams need to increase next-generation data protection capabilities, while reducing overall IT spending. It gets even more complicated when you consider all the applications, databases, and file systems that generate different types of workloads. No matter what, the business needs the right data at the right time. To deliver this consistency, the data needs to be secured. Next-generation data protection starts when the data lands in the storage array. There needs to be high reliability with 100% availability. There also needs to be data integrity. Each time data is accessed, the storage system should check and verify the data to ensure the highest degree of data integrity. Cyber resilience best practices require that you ensure data validity, as well as near-instantaneous recovery of primary storage and backup repositories, regardless of the size. This accelerates disaster recovery when a cyberattack happens. Greater awareness of best practices in cyber resilience would be one of the crowning achievements of this October as Cybersecurity Awareness Month. Let’s make it so.
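The "check and verify on every access" requirement is, at its core, content checksumming. The sketch below is a minimal illustration (the function names are invented for this example; real storage arrays do this in hardware and firmware): compute a digest when data is written, then re-compute and compare it on every read so silent corruption fails loudly instead of propagating into backups.

```python
import hashlib

def store(blob):
    """Write path: persist the data together with a digest of its contents."""
    return blob, hashlib.sha256(blob).hexdigest()

def read_verified(blob, expected_digest):
    """Read path: re-hash on every access; refuse to serve corrupted data."""
    if hashlib.sha256(blob).hexdigest() != expected_digest:
        raise IOError("integrity check failed: data was corrupted or tampered with")
    return blob

data, digest = store(b"backup segment 0001")
ok = read_verified(data, digest)              # a clean read passes the check

flipped = bytes([data[0] ^ 0x01]) + data[1:]  # simulate a single flipped bit
# read_verified(flipped, digest) would now raise IOError
```

Even a one-bit flip changes the SHA-256 digest entirely, which is why this kind of per-access verification underpins both data integrity guarantees and the "data validity" that cyber-resilience best practices call for.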


6 Strategies for Maximizing Cloud Storage ROI

Rising expenses in cloud data storage have prompted many organizations to reconsider their strategies, leading to a trend of repatriation as enterprises seek more control during these unpredictable economic times. A February 2024 Citrix poll revealed that 94% of organizations had shifted some workloads back to on-premises systems, driven by concerns over security, performance, costs, and compatibility. ... Common tactics of re-architecting applications, managing cloud sprawl and monitoring spend using the tools each cloud provides are a great first start. However, these methods are not the full picture. Storage optimization is an integral piece. Focusing on cloud storage costs first is a smart strategy since storage constitutes a large chunk of the overall spend. More than half of IT organizations (55%) will spend more than 30% of their IT budget on data storage and backup technology, according to our recent State of Unstructured Data Management report. The reality is that most organizations don’t have a clear idea on current and predicted storage costs. They do not know how to economize, how much data they have, or where it resides. 


As Software Code Proliferates, Security Debt Becomes a More Serious Threat

As AI-generated code proliferates, it compounds an already common problem, filling code bases with insecure code that will likely become security debt, increasing the risks to organizations. Just like financial debt, security debt can accrue quickly over time, the result of organizations compromising security measures in favor of convenience, speed or cost-cutting measures. Security debt, introduced by both first-party and third-party code, affects organizations of all sizes. More than 70% of organizations have security debt ingrained in their systems — and nearly half have critical debt. Over time, this accumulated debt poses serious risks because, as with financial debt, the bill will become due — potentially in the form of costly and consequential security breaches that can put an organization's data, reputation and overall stability at stake. ... Amid the dark clouds gathering over security debt, there is one silver lining. The number of high-severity flaws in organizations has been cut in half since 2016, which is clear evidence that organizations have made some progress in implementing secure software practices. It also demonstrates the tangible impact of quickly remediating critical security debt.


Why Liability Should Steer Compliance with the Cyber Security and Resilience Bill

First and foremost, the regulations are likely to involve an overhaul that will require a management focus. In the case of NIS2, for example, the board is tasked with taking responsibility for and maintaining oversight of the risk management strategy. This will require management bodies to undergo training themselves as well as to arrange training for their employees in order to equip themselves with sufficient knowledge and skills to identify risks and assess cybersecurity risk management practices. Yet NIS2 also breaks new ground in that it not only places responsibility for oversight of the risk strategy firmly at the feet of the board but goes on to state individuals could be held personally liable if they fail to exercise those responsibilities. Under article 32, authorities can temporarily prohibit any person responsible for discharging managerial responsibilities at CEO or a similar level from exercising managerial functions – in other words they can be suspended from office. We don’t know if the Cyber Security and Resilience Bill will take a similar tack but NIS2 is by no means alone in this approach. 


Tackling operational challenges in modern data centers

Supply chain bottlenecks continue to plague data centers, as shortages of critical components and materials lead to delays in shipping, sliding project timelines, and increased costs for customers. Many data center operators have become unable to meet their need for affected equipment such as generators, UPS batteries, transformers, servers, building materials, and other big-ticket items. This gap in availability is leading many to settle for any readily available items, even if not from their preferred vendor. ... The continuous heavy power consumption of data centers can strain local electrical utility systems with limited supply or transmission capacity. This raises the question of whether areas heavily populated with data centers, like Northern Virginia, Columbus, and Pittsburgh, have enough electricity capacity, and whether they should only be permitted to use a certain percentage of grid power. ... Like the rest of the world, data centers are now facing a climate crisis as temperatures climb and extreme weather events intensify. Data centers are also seeking ways to increase their power load and serve higher client demand without significantly increasing their electricity and emissions burdens.


The AI-driven capabilities transforming the supply chain

In today’s supply chain environment, there really is no room for disruption — be it labor shortages, geopolitical strife or malfunctions within manufacturing. To keep up with demand, supply chain teams are focused on continuous improvement and finding ways to remove the burden on expensive manual labor in favor of automated, digital solutions. When faulty products come off the production line, it must be addressed quickly. AI can accelerate the resolution process faster than human labor in many instances — preventing production standstills and even catching errors before they occur. Engineers who are creating a product can lean on these insights too, using AI to assess all the errors that have happened in the past to make sure that they don’t happen in the future. ... Through camera footage and visual inspections, AI models can help detect errors, faults or defects in equipment before they happen. If the technology identifies an issue — or predicts the need for maintenance — teams can arrange for a technician to perform repairs. This predictive maintenance minimizes unplanned outages, reduces disruptions across the supply chain and optimizes asset performance.


What makes a great CISO

Security settings were once viewed as binary — on or off — but today, security programs need to be designed to help organizations adapt and respond with minimal impact when incidents occur. Response and resilience planning now involves cybersecurity and business operations teams, requiring the CISO to engage across the organization, especially during incidents. ... In the past, those with a SecOps background often focused on operational security, while those with a GRC background leaned toward prioritizing compliance to manage risk, according to Paul Connelly, former CISO now board advisor, independent director and CISO mentor. “Infosec requires a base competence in technology, but a CISO doesn’t have to be an engineer or developer,” says Connelly. A broad understanding of infosec responsibilities is needed, but the CISO can come from any part of the team, including IT or even internal audit. Exposure to different industries and companies brings a valuable diversity of thinking. Above all, modern CISOs must prioritize aligning security efforts with business objectives. “Individuals who have zig-zagged through an organization, getting wide exposure, are better prepared than someone who rose through the ranks focused in SecOps or another single area of focus,” says Connelly.



Quote for the day:

"The great leaders are like best conductors. They reach beyond the notes to reach the magic in the players." -- Blaine Lee

Daily Tech Digest - October 20, 2024

6 Strategies for Overcoming the Weight of Process Debt

While technical debt is a more familiar concept stemming from software development that describes the cost of taking shortcuts or using quick fixes in code, process debt relates to inefficiencies and redundancies within organizational workflows and procedures. Process debt can also have far-reaching effects that are often less obvious to business leaders, making it an insidious force that can silently undermine business operations. ... Rather than simply adding a new technology into an old process or duplicating legacy steps in a new application, organizations need to undertake a detailed audit of existing processes to uncover inefficiencies, redundancies, and inaccuracies that contribute to process debt. This audit should involve a systematic review of all workflows, procedures, and operational activities to identify areas where performance is falling short or where resources are being wasted. To gain a deeper understanding, leverage process mapping tools to create visual representations of workflows. These tools allow you to document each step of a process, highlight how tasks flow between different departments or systems, and uncover hidden bottlenecks or points of friction.


Domain-specific GenAI is Coming to a Network Near You

Now, we're seeing domain-specific models crop up. These are specialized models that focus on some industry or incorporate domain best practices that can be centrally trained and then deployed and fine-tuned by organizations. They are built on specific knowledge sets rather than the generalized corpus of information on which conversational AI is trained. ... By adopting domain-specific generative AI, companies can achieve more accurate and relevant outcomes, reducing the risks associated with general-purpose models. This approach not only enhances productivity but also aligns AI capabilities with specific business needs. ... The question now is whether this specialization can be applied to domains like networking, security, and application delivery. Yes, but no. The truth is that predictive (classic) AI is going to change these technical domains forever. But it will do so from the inside-out; that is, predictive AI will deliver real-time analysis of traffic that enables an operational AI to act. That may well be generative AI if we are including agentic AI in that broad category. But GenAI will have an impact on how we operate networking, security, and application delivery. 


The human factor: How companies can prevent cloud disasters

A company’s post-mortem process reveals a great deal about its culture. Each of the top tech companies requires teams to write post-mortems for significant outages. The report should describe the incident, explore its root causes and identify preventative actions. The post-mortem should be rigorous and held to a high standard, but the process should never single out individuals to blame. Post-mortem writing is a corrective exercise, not a punitive one. If an engineer made a mistake, there are underlying issues that allowed that mistake to happen. Perhaps you need better testing, or better guardrails around your critical systems. Drill down to those systemic gaps and fix them. Designing a robust post-mortem process could be the subject of its own article, but it’s safe to say that having one will go a long way toward preventing the next outage. ... If engineers have a perception that only new features lead to raises and promotions, reliability work will take a back seat. Most engineers should be contributing to operational excellence, regardless of seniority. Reward reliability improvements in your performance reviews. Hold your senior-most engineers accountable for the stability of the systems they oversee.


Ransomware siege: Who’s targeting India’s digital frontier?

Small and medium-sized businesses (SMBs) are often the most vulnerable. This past July, a ransomware attack forced over 300 small Indian banks offline, cutting off access to essential financial services for millions of rural and urban customers. This disruption has severe consequences in a country where digital banking and online financial services are becoming lifelines for people’s day-to-day transactions. According to a report by Kaspersky, 53% of Indian SMBs experienced ransomware attacks in 2023, with 559 million attacks occurring between April and May of this year, making them the most targeted segment. ... For SMBs, the cost of paying a ransom, retrieving proprietary data, returning to full operations, and recovering lost revenue can be too much to bear. For this reason, many businesses opt to pay the ransom, even when there is no guarantee that their data will be fully restored. The Indian financial sector, in particular, has been a favourite target. This year the National Payments Corporation of India (NPCI), which runs the country’s digital payment systems, was forced to take systems offline temporarily due to an attack. Beyond the financial impact, these incidents erode trust in India’s push for a digital-first economy, impacting the country’s progress toward digital banking adoption.


What AMD and Intel’s Alliance Means for Data Center Operators

AMD and Intel’s alliance was a surprise for many. But industry analysts said their partnership makes sense and is much needed, given the threat that Arm poses in both the consumer and data center space. While x86 processors still dominate the data center space, Arm has made inroads with cloud providers Amazon Web Services, Google Cloud and Microsoft Azure building their own Arm-based CPUs and startups like Ampere having entered the market in recent years. Intel and AMD’s partnership confirms how strong Arm is as a platform in the PC, data center and smartphone markets, the Futurum Group's Newman said. But the two giant chipmakers still have the advantage of having a huge installed base and significant market share. Through the new x86 advisory group, AMD and Intel can benefit by making it easier for data center operators to leverage x86, he said. “This partnership is about the experience of the x86 customer base, trying to make it stickier and trying to give them less reason to potentially move off of the platform is valuable,” Newman said. “x86’s longevity will benefit meaningfully from less complexity and making it easier for customers.”


Cyber resilience is improving but managing systemic risk will be key

“Cyber insurance is recognised as a core component of a robust cyber risk management strategy. While we have seen fluctuations in cyber rates and capacity over the last five years, more recently we have seen rates softening in the market,” Cotelle said. “The emergence and adoption of AI has clear potential to revolutionise how businesses operate, which will create new opportunities but also new exposures. In the cyber risk context, AI is a double-edged sword: it can be exploited by threat actors to conduct more sophisticated attacks,” he said. ... He stressed, however, that one of the biggest challenges facing the cyber market is how it understands and manages systemic cyber risks. He said there is a case for considering the use of reinsurance pools and public/private partnerships to do this. “The continued attractiveness of the cyber insurance solution is paramount to the sustainability and growth of the market. In recent years, we have seen work by insurers to clarify particular aspects of coverage relating to areas such as cyber-related property damage, cyber war or infrastructure, which has led to coverage restrictions.”


Cyber resilience vs. cybersecurity: Which is more critical?

A common misconception is that cyber resilience means strong cybersecurity and that the organization won’t be compromised because its defenses are impenetrable. No defense is ever 100 percent secure, because IT products have flaws, and cybercriminals and nation-state-sponsored threat actors are continually changing their tactics, techniques and procedures (TTPs) to take advantage of any weaknesses they can find. And, of course, any organization with cyber resilience still needs quality cybersecurity in the first place. Resilience isn’t promising that bad things won’t happen; resilience promises that when they do, the organization can overcome them and continue to thrive. Cybersecurity is one of the foundations upon which resilience stands. Although cyber threats have increased in frequency and sophistication in recent years, there’s a huge amount that businesses in every sector can do to reduce the chances of being compromised and to prepare for the worst. The investment in time, energy and resources to prepare for a cyber incident is well worth it for the results you’ll see. Being cyber resilient is becoming a selling point as well.


Building Digital Resilience: Insider Insights for a Safer Cyber Landscape

These “basics” sound simple and are not difficult to implement, but we (IT, Security teams, and the Business) routinely fail at it. We tend to focus on the fancy new tool, the shiny new dashboard, quarterly profits, or even the latest analytical application. Yes, these are important and have their place, but we should ensure we have the “basics” down to protect the business so it can focus on profit and growth. Using patching as an example, if we can patch our prioritized vulnerabilities promptly, we reduce our threat landscape, which, in turn, offers attackers fewer doors and windows into our environment. The term may seem a little dated, but defense in depth is a solid method used to defend our often-porous environments. Using multiple levels of security, such as strong passwords, multi-factor authentication, resilience training, and patching strategies, makes it harder for threat actors, so they tend to move to another target with weaker defenses. ... In an increasingly digital world, robust recovery capabilities are not just a safety net but a strategic advantage and a tactical MUST. The actions taken before and after a breach are what truly matter to reduce the costliest impacts—business interruption. 


Information Integrity by Design: The Missing Piece of Values-Aligned Tech

To have any chance of fixing our dysfunctional relationship with information, we need solutions that can take on the powerful incentives, integration scale, and economic pull of the attention economy as we know it, and realign the market. One good example is the emerging platform Readocracy, designed from the outset with features that allow users to have much more control and context over their information experience. This includes offering users control over the algorithm, providing nudges to direct attention more mindfully, and providing information on how informed commenters are on subjects on which they are commenting. ... An information integrity by design initiative can focus on promoting the six components of information integrity outlined above so readers and researchers can make informed decisions on the integrity of the information provided. Government promotion and support can drive and support corporate adoption of the concept much like it's done for security by design, privacy by design, and, most recently, safety by design. ... Information integrity deserves fierce advocacy from governments, the intellectual ingenuity of civil society, and the creative muscle of industry. 


The backbone of security: How NIST 800-88 and 800-53 compliance safeguards data centers

When discussing data center compliance, it’s important not to leave out an important player: the National Institute of Standards and Technology (NIST). NIST is a non-regulatory federal agency whose cybersecurity framework is one of the most widely recognized and adopted, offering among the industry’s most comprehensive and in-depth sets of framework controls. NIST’s mission is to educate citizens on information system security for all applications outside of national security, including industry, government, academia, and healthcare on both a national and global scale. Its strict and robust standards and guidelines are widely recognized and adopted by data centers and government entities alike seeking to improve their processes, quality, and security. ... NIST 800-88 covers various types of media, including hard drives (HDDs), solid-state drives (SSDs), magnetic tapes, optical media, and other media storage devices. NIST 800-88 has quickly become the utmost standard for the U.S. Government and has been continuously referenced in federal data privacy laws. Moreover, NIST 800-88 guidelines have been increasingly adopted by private companies and organizations, especially data centers.



Quote for the day:

"To have long-term success as a coach or in any position of leadership, you have to be obsessed in some way." -- Pat Riley