Daily Tech Digest - October 28, 2024

Generative AI isn’t coming for you — your reluctance to adopt it is

Faced with a growing to-do list and the new balancing act of returning from maternity leave to an expanded role leading public relations for a publicly traded tech company, I opened Jasper AI. I admittedly smirked at some of the functionality. Changing the tone? Is this AI emotionally intelligent? Maybe more so than some former colleagues. I began with a blank screen, wrote a few lines, and asked the AI to complete the piece for me. I reveled in the schadenfreude of its failure. It summarized what I had written at the top of the document and just spit it out below. Ha! I had proven my superiority. I went back into my cave, denying myself and my organization the benefits of this transformative technology. The next time I used gen AI, something in me changed. I realized how much prompting matters. You can’t just type a few initial sentences and expect the AI to understand what you want. It still can’t read our minds (I think). But there are dozens of templates that the AI understands. For PR professionals, there are templates for press releases, media pitches, crisis communications statements, press kits and more.


What's Preventing CIOs From Achieving Their AI Goals?

"While no CIO wants to be left behind, they are also prudent about their AI adoption journeys and how they implement the technology for business in a responsible manner," said Dr. Jai Ganesh, chief product officer, HARMAN International. "While there are many business use cases, enterprises are prioritizing these on a must-have immediately to implement basis." ... He also oversees AI implementation across his company. Technology leaders say it will take at least two to three years before AI becomes mainstream across the enterprise. Rakesh Jayaprakash, chief analytics evangelist at ManageEngine, told ISMG that we would start to see "very tangible results" at a larger scale in another one or two years. "Tangible results" refer to commoditization of AI, which accelerates the ROI, he said. "While there is a lot of hype around AI now, the true value comes when the organizations are able to see the outcomes," Jayaprakash said. "Right now, many organizations jump in with very high expectations of what is possible through AI, because we've started to use tools such as ChatGPT to accomplish very simple tasks. But when it comes to organization-level use cases, those are a little more complex."


Bridging the Data Gap: The Role of Industrial DataOps in Digital Transformation

One of the main issues faced by organizations is the lack of context in industrial data. Unlike IT systems, where data is typically well-defined and structured, data from industrial environments often lacks the necessary context to be useful. For example, a temperature reading from a manufacturing machine might be labeled simply as “temperature sensor 1,” leaving operators to guess its relevance without proper identification. This lack of contextualization—when applied to thousands of data points across multiple facilities—is a major barrier to advanced analytics and digitalization programs. ... By implementing Industrial DataOps, organizations can address this gap by contextualizing data as close to the source as possible—ideally at the edge of the network. This approach empowers operators who have tribal knowledge of the data and its sources to deliver ready-to-use data to IT and line of business users in their organization. Decisions become faster and more informed. The ultimate goal is to transform raw data into valuable insights that drive operational improvements. ... As organizations adopt Industrial DataOps, they unlock the potential for rapid innovation. With a solid data management framework in place, OT teams can easily explore new use cases and validate hypotheses.
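As a concrete illustration, here is a minimal Python sketch of edge-side contextualization, using the excerpt's own “temperature sensor 1” example; the metadata fields and tag registry are hypothetical, not any particular DataOps product's schema:

```python
from datetime import datetime, timezone

# Tribal knowledge captured once as a context registry (hypothetical
# fields), then applied to every raw reading before it leaves the edge.
SENSOR_CONTEXT = {
    "temperature sensor 1": {
        "asset": "extruder-03",
        "line": "packaging-line-2",
        "site": "plant-austin",
        "unit": "celsius",
        "alarm_threshold": 85.0,
    },
}

def contextualize(tag: str, value: float) -> dict:
    """Enrich a raw OT reading so IT and analytics users can consume it."""
    context = SENSOR_CONTEXT.get(tag, {})
    return {
        "tag": tag,
        "value": value,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        **context,
    }

print(contextualize("temperature sensor 1", 78.2))
```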


Ensuring AI-readiness of Data Is a Long-term Commitment

Data becomes intellectual property when one enters the world of GenAI; it is the means by which one can customize algorithms to reflect the brand voice and deliver great client service. With that in mind, Birkhead states that modernizing data and ensuring its AI-readiness is a long-term commitment. While organizations can make incremental progress year after year, building an analytic factory to produce AI models that support the business takes strategy, investment, and an enabling leadership team. Highlighting JPMC’s data strategy, Birkhead states that the components include data design principles, operating models, principles around platforms, tooling, and capabilities. Additionally, talent, governance, data, and AI ethics also come into play, but the ultimate goal is to have incredibly high-quality data that is self-describing and understandable by both humans and machines. From Birkhead’s standpoint, to be AI-ready with data, organizations have to get data to a state where a data scientist, user, or AI researcher can go into a marketplace and understand everything about the data.


Business Etiquette Classes Boom as People Relearn How to Act at Work

Workers who had substantial professional experience before the pandemic, including managers and executives, still need help adapting to hybrid and remote work, Senning said. He has been coaching leaders on best practices for such things as communicating through your calendar and deciding whether to call, text or use Slack to reach an employee. Establishing etiquette for video meetings has also been a challenge for many firms, he notes. Bad behavior in virtual meetings has occasionally made headlines in recent years, such as the backlash against Vishal Garg, CEO of the mortgage lending firm Better.com, for announcing mass layoffs over Zoom ahead of the holidays in 2021. "If I had a magic button that I could push that could get people to treat video meetings with 50 percent of the same level of professionalism they treat an in-person meeting, I would make a lot of HR, personnel managers, and executives very, very happy," Senning said. Tech companies also are paying for etiquette and professionalism training for their workers, especially if they're bringing in employees who have never worked in person before, according to Crystal Bailey, director of the Etiquette Institute of Washington, who counts Amazon among her clients.


Exploring the Power of AI in Software Development - Part 1: Processes

AI holds the power to significantly enhance the requirement analysis and planning processes at the early stages of the software development life cycle (SDLC). It can analyze massive amounts of data in order to identify user needs and preferences, allowing developers to make informed decisions about features and functionality. ... AI can also look at coding rates per user story within an app architecture context and allow Product Managers to better determine project timelines and resource needs. In doing so, they can more accurately predict the risk-reward of time-to-market versus high quality for every release, knowing that no software will be 100% defect-free. ... With AI, you have a pair programmer who has infinite patience. Someone who will not judge you for seemingly "stupid" questions. Having this kind of support can increase an engineer's capabilities and productivity. So often as a junior engineer, I was afraid to ask the senior engineers on my team questions because I thought I should know the answer. Engineers can use AI without the worry of judgment, so no question is stupid and no answer is expected to already be known.


How AI is Shaping the Future of Product Development

Product testing and iteration processes are also being revolutionized by AI, resulting in shorter development cycles and better product outcomes. While tried and true testing methods can work well, they often have long cycles or may miss problems. In contrast to traditional testing, AI-driven automation offers a new degree of efficiency and accuracy. AI tools for early-stage testing make it possible to discover issues quickly and try out potential applications, which lowers the demand on manual resources spent validating components or debugging. Beyond that, AI's ability to analyze code bases comprehensively provides targeted insights for ongoing improvements. By integrating AI into testing processes, businesses can accelerate development cycles, reduce costs, and deliver products that better align with user expectations. ... By embedding AI into their growth strategies, companies can benefit in numerous ways. It allows more targeted, personalized experiences to be delivered, tailoring the products or services companies provide. Such custom-built solutions not only enhance user experience but also help create brand loyalty. Additionally, AI enables data-driven decision making that facilitates strategic planning and execution.


From Safety to Innovation: How AI Safety Institutes Inform AI Governance

According to the report, this “first wave” of AISIs has three common characteristics:
Safety-focus: The first wave of AISIs was informed by the Bletchley AI Safety Summit, which declared that “AI should be designed, developed, deployed, and used in a manner that is safe, in such a way as to be human-centric, trustworthy, and responsible.” These institutes are particularly concerned with mitigating abuse and safeguarding frontier AI models.
Government-led: These AISIs are governmental institutions, providing them with the “authority, legitimacy, and resources” needed to address AI safety issues. Their governmental status helps them access leading AI models to run evaluations, and importantly, it gives them greater leverage in negotiating with companies unwilling to comply.
Technical: AISIs are focused on attracting technical experts to ensure an evidence-based approach to AI safety.
The report also points out some key ways AISIs are unique. For one, AISIs are not a “catch-all” entity to tackle the complex and ever-evolving AI governance landscape. They are also relatively free of the bureaucracy commonly associated with governmental agencies. This may be due to the fact that these institutes have very little regulatory authority and focus more on establishing best practices and conducting safety evaluations to inform responsible AI development.


Current Top Trends in Data Analytics

One of the most impactful data analytics trends right now is the integration of AI and machine learning (ML) into analytics frameworks, observes Anil Inamdar, global head of data services at data monitoring and management firm Instaclustr by NetApp, in an online interview. "We are seeing the emergence of a new data 4.0 era, which builds on previous shifts that focused on automation, competitive analytics, and digital transformation," Inamdar states. "This distinct new phase leverages AI/ML and generative AI to significantly enhance data analytics capabilities," he says. While the transformative potential is now here for the taking, enterprises must carefully strategize across several key areas. ... Data governance should be a top concern for all enterprises. "If it isn't yours, you’re heading for a world of hurt," warns Kris Moniz, national data and analytics practice lead for business and technology advisory firm Centric Consulting, via email. Data governance dictates the rules under which data should be managed, Moniz says. "It doesn’t just do this by determining who gets access to what," he notes. "It also does it by defining what your data is, setting processes that can guarantee its quality, building frameworks that align disparate systems across common domains, and setting standards for common data that all systems should consume."


Effective Data Mesh Begins With Robust Data Governance

When implemented correctly, removing the dependency on centralised systems and IT teams can truly transform the way organisations operate. However, introducing a data mesh can also raise fears and concerns relating to storage, duplication, management, and compliance, all of which must be addressed if it is to succeed. With decentralised data management, it’s also critical that everyone follows the same stringent set of rules, particularly regarding the creation, storage, and protection of data. If not, issues will quickly arise. Additionally, if any team leaders or department heads put their own tools or processes in place, the results may cause far more problems than they solve. Trusting individuals to stick to data guidelines is too risky. Instead, adherence should be enforced in a way that ensures standards are followed, without impacting agility or frustrating users. This may sound impractical, but a computational governance approach can impose the necessary restrictions, while at the same time accelerating project delivery. Naturally, not everyone will be quick (or keen) to adjust, but with additional support and training even the most reluctant individuals can learn how to adopt a more entrepreneurial mindset.
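To make the idea concrete, here is a minimal Python sketch of what a computational governance check might look like; the policy rules and descriptor fields are illustrative assumptions, not a specific data mesh platform's API:

```python
# Hypothetical policy rules and descriptor fields; the point is that
# adherence is checked by code at publish time, not left to trust.
REQUIRED_FIELDS = {"owner", "domain", "retention_days", "pii_classified"}

def validate_data_product(descriptor: dict) -> list:
    """Return policy violations for a data product; empty means compliant."""
    violations = [f"missing field: {f}" for f in REQUIRED_FIELDS - descriptor.keys()]
    if descriptor.get("retention_days", 0) > 365:
        violations.append("retention exceeds the 365-day policy")
    if not descriptor.get("pii_classified") and "email" in descriptor.get("columns", []):
        violations.append("possible unclassified PII column: email")
    return violations

product = {"owner": "sales-team", "domain": "crm", "retention_days": 400,
           "pii_classified": False, "columns": ["email", "region"]}
print(validate_data_product(product))  # publish only when this list is empty
```

Run on every publish, a check like this enforces the standards automatically, without a central team becoming a bottleneck.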



Quote for the day:

"Trust is the lubrication that makes it possible for organizations to work." -- Warren G. Bennis

Daily Tech Digest - October 27, 2024

Who needs a humanoid robot when everything is already robotic?

The service sector will see a surge in delivery robots, streamlining last-mile package and food delivery logistics. Advanced cleaning robots will maintain both homes and commercial spaces. Surgical robots performing minimally invasive procedures with high precision will benefit healthcare. Rehabilitation robots and exoskeletons will transform physical therapy and mobility, while robotic prosthetics will offer enhanced functionality to those who need them. At the microscopic level, nanorobots will revolutionize drug delivery and medical procedures. Agriculture will increasingly embrace harvesting and planting robots to automate crop management, with specialized versions for tasks like weeding and dairy farming. Autonomous vehicles and drone delivery systems will transform the transportation sector, while robotic parking solutions will optimize urban spaces. Military and defense applications will include reconnaissance drones, bomb disposal robots, and autonomous combat vehicles. Space exploration will continue to rely on advanced rovers, satellite-servicing robots, and assistants for astronauts on space stations. Underwater exploration robots and devices monitoring air and water quality will benefit environmental and oceanic research.


Cybersecurity Isn't Easy When You're Trying to Be Green

Already, some green energy infrastructure has fallen prey to attackers. Charging stations for electric vehicles typically require connectivity, which makes them vulnerable to both compromise and disruption. In 2022, pro-Ukrainian hacktivists compromised chargers in Moscow to display messages of support for Ukraine. In 2019, a solar firm could no longer manage its 500 megawatts of wind and solar sites in the western US after a denial-of-service attack targeted an unpatched firewall, the FBI stated in a Private Industry Notification (PIN) in July. The risk could extend all the way to homeowners, who increasingly have adopted rooftop solar and need to be connected to be able to deliver their solar power and be credited. "This issue will only become more important as small solar systems continue to grow. When every house is a power plant, every house is a target," Morten Lund, of counsel for Foley & Lardner LLP, wrote in a brief directed at energy companies. "In many ways, the distributed nature of solar energy provides significant protection against catastrophic failures. But without sufficient protection at the project level, this strength quickly becomes a weakness."


A look at risk, regulation, and lock-in in the cloud

The threat here, if indeed it is a threat, is multifaceted. Firstly, financial implications can be significant. When a company heavily invests in a specific vendor’s ecosystem, the costs of migrating to a different provider, both in terms of money and resources, can be prohibitive. The reality is that any technology comes with a certain degree of lock-in. That is why I’m often amazed at enterprises that ask me for zero lock-in in any enterprise technology decision. It just does not exist. The question is how do we minimize the impact of the lock-in that any use of technology brings. This is something I explain extensively to enterprises. The risk is operational; dependencies on proprietary APIs and services might necessitate extensive application rewriting. ... Whether governmental regulation is a boon or a bane is a matter of perspective. On one side, it could enforce fairness, ensuring that no single provider exploits its position to the detriment of customers. Conversely, excessive regulation might stifle innovation and limit the aggressive evolution that characterizes the tech world. Also, we should consider that these regulations exist within one or a few countries, and as enterprises are now mostly international firms, regulation has less of a chilling effect than most expect.


Biometrics options expand, add more layers to secure financial services

The range of technologies being brought to bear against different fraud vectors also includes Herta’s biometrics being utilized by the EU’s EITHOS project to detect deepfakes, and age assurance and automated border control measures a pair of governments are looking into for contract opportunities. ... Mastercard is rolling out passkeys for payments in the Middle East and North Africa, following their launch in India. Starting with the noon Payments platform in the UAE, the Payment Passkey Service will be offered as a more secure alternative to OTPs at online checkouts. A Washington, D.C.-based think tank says America has a digital verification divide, due to the lack of documents possessed by low-income and marginalized people and the conflation of biometrics for ID verification with surveillance and law enforcement. Login.gov has helped less than it is supposed to so far, but evidence from ID.me suggests that the situation could be improved with biometrics. Panama has introduced a national digital ID and wallet for identity verification to access public and private services online. The digital ID is available to both citizens and permanent residents, and essentially digitizes the national ID card supplied by Mühlbauer and partners.


AI Won’t Fix Your Software Delivery Problems

You can assess your personal productivity because it’s a feeling rather than a number. You don’t feel productive when dealing with busy work or handling constant interruptions. When you get a solid chunk of time to complete a task, you feel great. If an organization is interested in this kind of productivity, it should check in on employee satisfaction because people tend to be more satisfied when they can get things done. The State of DevOps report confirms this problem, as the high ratings for AI-driven productivity aren’t reducing toil work or improving software delivery performance, which we’ve long held to be a solid way for development teams to contribute to the organization’s goals. ... Given the intense focus on increasing the speed of coding, we’re likely seeing suboptimization on a massive scale. Writing code is rarely the bottleneck for feature development. Speeding up the coding itself is less valuable if you aren’t catching the bugs it introduces with automated tests. It also fails to address the broader software delivery system or guarantee your features are useful to users. If you aren’t working at the constraint, your optimizations don’t improve throughput. In many cases, optimizing away from the constraint harms the end-to-end system.


The mainframe’s future in the age of AI

Running AI on mainframes as a trend is still in its infancy, but the survey suggests many companies do not plan to give up their mainframes even as AI creates new computing needs, says Petra Goude ... “AI can be assistive technology,” Dyer says. “I see it in terms of helping to optimize the code, modernize the code, renovate the code, and assist developers in maintaining that code.” ... “Many institutions are willing to resort to artificial intelligence to help improve outdated systems, particularly mainframes,” he says. “AI reduces the burden on several work phases, such as code rewriting or replacing databases, which streamlines the whole upgrading stage.” ... Many organizations have their mission-critical data residing on mainframes, and it may make sense to run AI models where that data resides, Dyer says. In some cases, that may be a better alternative than moving mission-critical data to other hardware, which may not be as secure or resilient, she adds. “You have both your customer data and then you have what I’ll call the operational data on the mainframe,” she says. “I can see the value of being able to develop and run your models directly right there, because you don’t have to move your data, you have very low latency, high throughput, all those things that you would want for certain types of AI applications.” 


How (and why) federated learning enhances cybersecurity

Federated learning’s popularity is rapidly increasing because it addresses common development-related security concerns. It is also highly sought after for its performance advantages. Research shows this technique can improve an image classification model’s accuracy by up to 20% — a substantial increase. ... Once the primary algorithm aggregates and weighs participants’ updates, it can be reshared for whatever application it was trained for. Cybersecurity teams can use it for threat detection. The advantage here is twofold — while threat actors are left guessing since they cannot easily exfiltrate data, professionals pool insights for highly accurate output. Federated learning is ideal for adjacent applications like threat classification or indicator of compromise detection. The AI’s large dataset size and extensive training build its knowledge base, curating expansive expertise. Cybersecurity professionals can use the model as a unified defense mechanism to protect broad attack surfaces. ML models — especially those that make predictions — are prone to drift over time as concepts evolve or variables become less relevant. With federated learning, teams could periodically update their model with varied features or data samples, resulting in more accurate, timely insights.
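The aggregation step described above is the heart of the technique. Here is a minimal Python sketch of FedAvg-style weighted averaging, assuming NumPy; note that clients share only model weights, never raw data:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Aggregate local model weights, weighting each client's update
    by its dataset size (the FedAvg aggregation rule)."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three clients train locally and share only parameter vectors;
# raw training data never leaves each client.
clients = [np.array([0.9, 1.1]), np.array([1.0, 1.0]), np.array([1.2, 0.8])]
sizes = [100, 250, 650]

global_model = federated_average(clients, sizes)
print(global_model)  # redistributed to clients for the next training round
```

The same loop, repeated over many rounds, is what lets security teams pool threat-detection insights without exposing the underlying data to exfiltration.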


Augmented Reality's Healthcare Revolution

Many observers believe that AR's most immediate benefit will be in training both current and future healthcare professionals. "AR enables students to interact with virtual content in a real-world setting, providing contextualized learning experiences," Stegman says. Meanwhile, full virtual reality (VR) will offer a completely immersive training environment in which students can practice clinical skills without the risks associated with real patient care. ... As AR begins entering the healthcare mainstream, deep-pocketed large hospitals and specialized medical centers will most likely be the leading adopters, says SOTI's Anand. He reports that his firm's latest healthcare report found that 89% of US healthcare industry respondents agree that artificial intelligence simplifies tasks. "This gives a hint that healthcare organizations are already on the path to integrating advanced technologies," Anand notes. ... AR technology is rapidly evolving, and improvements in hardware (such as AR glasses and headsets), software, and integration with other medical technologies, are rapidly making AR more practical and effective. "As these technologies mature, they will become more accessible and affordable," Reitzel predicts.


Achieving peak cyber resilience

In a non-malicious, traditional disaster incident such as hardware failure or accidental deletion, the backup platform isn’t a target. Recovery is straightforward with a recent backup copy. You can quickly recover right back to the original location or an alternative location. In contrast, a cyberattack maliciously goes after anything and everything, making recovery complex. Backups are an especially attractive target for hackers because they represent an organization’s last line of defense. In a cyberattack scenario, the priority is containing the breach to stop further damage. Forensics teams must pinpoint how the attacker gained entry, find vulnerabilities and malware, and prevent reinfection by diagnosing which systems were potentially affected. Data decontamination is then needed to ensure threats aren’t reintroduced during recovery. Ransomware events can also necessitate coordination across IT disciplines, various business teams, legal, public, investor and government entities. Disaster recovery is likely something your organization deals with only infrequently. ... Cybercriminals have been enjoying the first-mover advantage in putting AI to work for their nefarious purposes. AI tools have allowed them to increase the frequency, speed and scale of their attacks. But now it’s time to fight fire with fire.


Who Are the AI Goliaths in the Banking Industry? A New Index Reveals a Growing Divide

In the Leadership pillar, banks have significantly increased their AI-related communications. The 50 Index banks published over 1,250 references to “AI” across annual reports, press releases, and company LinkedIn posts—representing a 59% increase year-over-year. This increase in “volume” was accompanied by an increase in “substance,” both across Investor Relations materials and in the engagement of Executive leaders across external media, industry conferences, and LinkedIn. As AI investments mature, the pressure is mounting for banks to demonstrate tangible returns. While 26 banks are now reporting outcomes from AI use cases, only 6 are disclosing financial impacts, and just two (DBS and JPMorgan Chase) are attempting to estimate total realized dollar outcomes across all AI investments. JPMorgan Chase, for instance, reported that the value it assigns to its AI use cases is between $1 billion and $1.5 billion in fields such as customer personalization, trading, operational efficiencies, fraud detection, and credit decisioning. DBS, on the other hand, reported an economic value of SGD 370 million from its use of AI/ML in 2023, more than double the value from the previous year.



Quote for the day:

"The quality of leadership, more than any other single factor, determines the success or failure of an organization." -- Fred Fiedler & Martin Chemers

Daily Tech Digest - October 24, 2024

The power of prime numbers in computing

Another interesting area where primes pop up in coding is creating hash functions. In a hash function, the primary job is to take an input and transform it into a number that stands in its place. The number is a reduction of the overall input, and this fact makes it useful for many things like checksums and structures like hashtables. Hashing for a hashtable (the hash function for the object being placed into the collection; i.e., Java’s hashCode) uses a modulo of a constant, and that constant is recommended to be a prime. In that case, using a prime for the constant can help reduce the likelihood of collisions. That’s because the primeness of the number makes for a more even distribution of modulo results: keys are less likely to share common factors with the divisor. For the same reason, a prime hashtable “bucket count” helps prevent asymmetric collisions. In essence, using primes for the hashing constant and bucket count helps to ensure a good random distribution of items in buckets by reducing the likelihood of significant relationships between the two. ... Now let’s flip things around a bit and look at how coding helps us handle and understand one of the classic problems of math: discovering primeness. An ancient algorithm was described by Eratosthenes, working in the 3rd century BC.
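Both points lend themselves to a quick demonstration. A short Python sketch: first, why a prime bucket count spreads correlated keys more evenly than a composite one; then the Sieve of Eratosthenes the excerpt closes on:

```python
from collections import Counter

# Why a prime bucket count helps: keys sharing a factor with the
# divisor pile into a few buckets, while a prime divisor spreads them.
keys = range(0, 200, 4)               # correlated keys: all multiples of 4
print(Counter(k % 8 for k in keys))   # 8 buckets: only buckets 0 and 4 fill
print(Counter(k % 7 for k in keys))   # 7 (prime) buckets: near-even spread

def primes_up_to(n):
    """Sieve of Eratosthenes: repeatedly mark multiples of each prime."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    p = 2
    while p * p <= n:
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
        p += 1
    return [i for i, flag in enumerate(is_prime) if flag]

print(primes_up_to(50))  # [2, 3, 5, 7, 11, ..., 47]
```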


New research reveals AI adoption on rise, but challenges remain in data governance and ROI realisation

Commenting on the survey, Noshin Kagalwalla, Vice President & Managing Director, SAS India, said: “Indian companies are undoubtedly making progress in AI adoption, but significant work remains. The challenge lies not only in deploying AI but also in a way that it is trustworthy, scalable, and aligned with long-term business objectives. Strategic investments in data governance and AI infrastructure will be crucial to driving sustainable AI performance across industries in India.” “The disparity in target outcomes between AI Leaders and AI Followers demonstrates a lack of clear strategy and roadmap. Where AI Followers are focused on short-term, productivity-based results, AI Leaders have moved beyond these to more complex functional and industry use cases,” said Shukri Dabaghi, Senior Vice President, Asia Pacific and EMEA Emerging at SAS. “As businesses look to capitalise on the transformative potential of AI, it’s important for business leaders to learn from the differences between an AI Leader and an AI Follower. Avoiding a ‘gold rush’ way of thinking ensures long-term transformation is built on trustworthy AI and capabilities in data, processes and skills,” said Mr. Dabaghi.


Dulling the impact of AI-fueled cyber threats with AI

Organizations that wish to curb the burgeoning impact of AI on their cyber risks need to be particularly vigilant while taking advantage of the abilities of AI to stem this tide of attacks. With AI capable of analyzing vast amounts of data, it can detect anomalies across their operations, such as spikes in network traffic, unusual user activities, and even suspicious mail. This approach also reduces the time taken for companies to respond to attacks. Automation, too, can be applied to processes such as cyber threat hunting and vulnerability assessments while rapidly mitigating potential damage in the event of a cyberattack. Moreover, AI can reduce false positives more effectively than rule-based security systems. Contextualizing patterns and identifying potential threats can minimize alert fatigue and optimize the use of resources. Organizations can even take pre-emptive steps to stop future attacks before they happen with AI’s predictive capabilities. AI can also personalize training for employees more vulnerable to social engineering attacks. Then there’s reinforcement learning, a machine learning approach that trains algorithms to make effective cybersecurity decisions.
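As a toy illustration of the anomaly detection described here, a minimal Python sketch, assuming NumPy; a rolling z-score flags traffic spikes, whereas production systems would use far richer models and features:

```python
import numpy as np

def flag_anomalies(traffic, window=60, threshold=4.0):
    """Flag samples far above the rolling baseline (z-score test)."""
    flags = []
    for i in range(window, len(traffic)):
        baseline = traffic[i - window:i]
        mu, sigma = baseline.mean(), baseline.std()
        if sigma > 0 and (traffic[i] - mu) / sigma > threshold:
            flags.append(i)
    return flags

rng = np.random.default_rng(0)
traffic = rng.normal(100, 5, 500)   # steady network-traffic baseline
traffic[400] = 200                  # injected spike, e.g., an exfiltration burst
print(flag_anomalies(traffic))      # the spike at index 400 is flagged
```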


6 Essential Components of a Successful Security ‘Rewards Program’ for Software Developers

To effectively gauge developers’ security capabilities, evaluations should extend beyond training and skill assessments to analyze their behavior during code production. With these benchmarks in place, consider the following questions: How many mistakes are developers still making? Are they learning from their mistakes and fixing security bugs? Are they coaching peers to develop code securely? Do they conduct peer reviews of pull requests for security flaws? ... We understand that developer teams are under pressure to produce better code faster. As a result, they may view security as a barrier to innovation, leading them to take shortcuts or ignore vulnerabilities entirely. To evaluate the current security culture and the mentorship provided to developers, it is important to assess not only whether they are coaching their peers but also the depth and effectiveness of their guidance and how it impacts their own security practices. By establishing a baseline to verify and measure developers’ secure coding skills, security teams will get a clear sense of how well developers are producing secure code from the beginning.


Angular’s Approach to Partial Hydration

Janiuk noted there was a lot of confusion about what hydration actually means, so she began by defining it. “It is a server-side rendering initial load optimization for web apps,” she told the audience. She then walked through what actually happens during hydration. “We’ve got a little happy web server here, and that web server has your application on it,” she said. “That web server is like, ‘Great, I’m going to render that out,’ but what it actually just does is it generates some DOM nodes.” The DOM nodes end up being just a string that is passed off to the client browser, which renders the HTML, she continued. ... The hydration process is essentially causing the browser to load the application. “It’s the meshing together of the DOM that was rendered by your web server and the application waking up and identifying what that DOM is — that’s the process of hydration, remeshing together your application code with the DOM,” she said. Rather than fully hydrate the application immediately, partial hydration allows developers to identify portions of their application — maybe a footer or something that a user will not immediately need to see — and rather than ship all of the JavaScript in the app, it “hydrates” only the parts that are needed immediately.


Overconfidence in Cybersecurity: A Hidden Risk

Overconfidence in cybersecurity is a serious and often overlooked risk. Too many companies believe that investing in the latest tools and hiring top talent guarantees safety. But it doesn't. Without constantly adapting your strategy, even the best technology won’t protect you. The greatest danger might not come from hackers, but from your own false sense of security. It’s easy to think that spending millions on sophisticated tools will keep threats at bay. The more rigid your approach, the more exposed you become. Cyber threats evolve constantly -- if you don’t keep up, you’re inviting risk. ... As threats grow to be more sophisticated, companies are doubling down on technology to defend themselves. The more you rely on tools without oversight, the more exposed you become. Don’t assume you’re safe just because you’ve invested heavily in security. By streamlining, auditing, and focusing on the human element, you can avoid the pitfalls of overconfidence. In cybersecurity, confidence should come from having the right processes and people -- not just the latest tools. By following these steps and learning from cases like Uber, you’ll strengthen your defenses and avoid the dangers of overconfidence. It’s not about having more tech -- it’s about using it effectively.


4 Key Reasons to Build a Data Culture

Building a data culture within an organization fosters numerous benefits that can significantly enhance organizational development. A data-driven environment encourages informed decision-making by leveraging accurate and timely information. This leads to more strategic planning and problem-solving, as decisions are based on empirical evidence rather than intuition or anecdotal experiences. Consequently, this reduces risks and increases the likelihood of successful outcomes. ... By leveraging data analytics, companies can extract valuable insights from vast amounts of raw data, enabling them to make informed decisions that drive growth and efficiency. Business intelligence (BI) goes a step further by transforming these insights into actionable strategies that align with the company’s objectives. ... Leveraging a robust data culture for strategic planning and performance improvement is pivotal in today’s competitive landscape. By fostering a culture where data is integral to decision-making processes, businesses can systematically analyze trends, forecast outcomes, and identify potential challenges before they escalate. 


Exploring the Transformative Potential of AI in Cybersecurity

AI-powered systems can monitor network traffic in real-time, automatically identifying and prioritizing potential threats. These systems can correlate data from multiple sources, providing a holistic view of the security landscape and enabling faster, more informed decision-making. AI can automate the process of threat intelligence gathering and analysis. By continuously scanning the dark web, hacker forums and other sources, AI systems can provide up-to-date intelligence on emerging threats, attack techniques, and vulnerabilities. This real-time intelligence allows security teams to proactively update defenses and patch vulnerabilities before they can be exploited. Perhaps the most exciting potential of AI in cybersecurity lies in its predictive capabilities. By analyzing historical data and current trends, AI systems can forecast potential future attacks and vulnerabilities. ... While the potential of AI in cybersecurity is immense, it’s not without challenges. AI systems are only as good as the data they’re trained on, and ensuring the quality and diversity of training data is crucial. There’s also the risk of adversarial AI, where attackers use AI to evade detection or launch more sophisticated attacks.


Connected Vehicles and Data Privacy & Sovereignty in the Global South

In addition to data privacy, the rise of connected vehicles raises concerns about data sovereignty. Data sovereignty refers to the handling and control of data in line with a country's legal frameworks, practices, cultural norms, and laws, including those related to data protection, competition, and national security. It may involve ensuring that countries retain “control” over their residents’ and government data; consequently, relevant policies may include conditions on data transfers and restrictions on reliance on foreign technology that could lead to data being stored overseas. The presence of foreign-connected vehicles roaming a country’s streets raises digital sovereignty concerns. Many experts and scholars push back on equating digital sovereignty with other threats to a nation’s sovereignty. For example, Chander and Sun argue that European concerns regarding the dominance of large platforms are “misplaced.” “It is like arguing that because people drive Toyota cars on U.S. roads, we no longer control our streets. As long as the cars are regulated by local law, the fact that they might be built abroad should not undermine sovereignty,” they contend. However, with connected vehicles now widespread, has this dynamic shifted? 


What Are Hierarchical Security Practices in DevOps?

Adopting hierarchical security practices in DevOps brings several benefits. By integrating security checks at every stage, organizations can ensure a smoother release process and enhance reliability. This approach also encourages collaboration by making security a shared responsibility across development, testing, and operations teams, breaking down silos and fostering a culture of security mindfulness. However, there are challenges to consider. Implementing security measures across all levels demands careful coordination, especially for larger or distributed teams. The initial phase of adopting these practices may slow development as teams adjust to new tools and protocols. Moreover, hierarchical security is resource-intensive, requiring time, training, and investment in appropriate tools. Beyond the technical aspects, there is also a cultural shift required — team members must embrace security as an integral part of their roles, which can sometimes meet resistance. Organizations need to balance these benefits and challenges carefully, tailoring their hierarchical security approach to fit their specific needs, goals, and resources. 



Quote for the day:

"The secret of getting things done is to act!" -- Dante Alighieri

Daily Tech Digest - October 23, 2024

What Is Quantum Networking, and What Might It Mean for Data Centers?

Conventional networks shard data into packets and move them across wires or radio waves using long-established networking protocols, such as TCP/IP. In contrast, quantum networks move data using photons or electrons. They leverage unique aspects of quantum physics to enable powerful new features like entanglement, which effectively makes it possible to verify the source of data based on the quantum state of the data itself. ... Because quantum networking remains a theoretical and experimental domain, it's challenging to say at present exactly how quantum networks might change data centers. What does seem clear, however, is that data center operators seeking to offer full support for quantum devices will need to implement fundamentally new types of network infrastructure. They'll need to deploy infrastructure resources like quantum repeaters, while also ensuring that they can support whichever networking standards might emerge in the quantum space. The good news for the fledgling quantum data center ecosystem is that true quantum networks aren't a prerequisite for connecting quantum computers. It's possible for quantum machines themselves to send and receive data over classical networks by using traditional computers and networking devices as intermediaries.


Unmasking Big Tech’s AI Policy Playbook: A Warning to Global South Policymakers

Rather than a genuine, inclusive discussion about how governments should approach AI governance, what we are witnessing instead is a clash of seemingly competing narratives swirling together to obfuscate the real aspirations of big tech. The advocates of open-source large language models (LLMs) present themselves as civic-minded, democratic, and responsible, while closed-source proponents position themselves as the responsible stewards of secure, walled-garden AI development. Both sides dress their arguments with warnings about dire consequences if their views aren’t adopted by policymakers. ... For years, tech giants have employed scare tactics to convince policymakers that any regulation will stifle innovation, lead to economic decline, and exclude countries from the prestigious digital vanguard. These dire warnings are frequently targeted, especially in the Global South, where policymakers often lack the resources and expertise to keep pace with rapid technological advancements, including AI. Big tech’s polished lobbyists offer what seems like a reasonable solution, “workable regulation” — which translates to delayed, light-touch, or self-regulation of emerging technologies.


AI Agents: A Comprehensive Introduction for Developers

The best way to think about an AI agent is as a digital twin of an employee with a clear role. When any individual takes up a new job, there is a well-defined contract that establishes the essential elements — such as job definition, success metrics, reporting hierarchy, access to organizational information, and whether the role includes managing other people. These aspects ensure that the employee is most effective in their job and contributes to the overall success of an organization. ... The persona of an AI agent is the most crucial aspect that establishes the key trait of an agent. It is the equivalent of a title or a job function in the traditional environment. For example, a customer support engineer skilled in handling complaints from customers is a job function. It is also the persona of an individual who performs this job. You can easily extend this to an AI agent. ... A task is an extension of the instruction that focuses on a specific, actionable item within the broader scope of the agent’s responsibilities. While the instruction provides a general framework covering multiple potential actions, a task is a direct, concrete action that the agent must take in response to a particular user input.
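A minimal Python sketch of that anatomy, with persona, instruction, and task as plain data structures; the field names are illustrative, not any particular agent framework's API:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    persona: str                  # who the agent is, e.g., a job function
    instruction: str              # general framework for how it should act
    tools: list = field(default_factory=list)  # organizational access it is granted

@dataclass
class Task:
    description: str              # concrete, actionable item
    user_input: str               # the specific input that triggered it

support_agent = Agent(
    persona="Customer support engineer skilled in handling complaints",
    instruction="Resolve complaints politely; escalate refunds over $100.",
    tools=["order_lookup", "ticketing"],
)

task = Task(
    description="Draft a response to a late-delivery complaint",
    user_input="My order #123 is five days late.",
)
```

Separating the broad instruction from the narrow task mirrors the employment-contract analogy: the persona and instruction are the job description, while each task is a single assignment within it.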


AI in compliance: Streamlining HR processes to meet regulatory standards

With the increasing focus on data protection laws like the General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and India’s Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 under the Information Technology Act, 2000, maintaining the privacy and security of employee data has become paramount. The Indian IT Privacy Law mandates that companies ensure the protection of sensitive personal data, including employee information, and imposes strict guidelines on how data must be collected, processed, and stored. AI can assist HR teams by automating data management processes and ensuring that sensitive information is stored securely and only accessed by authorized personnel. AI-driven tools can also help monitor compliance with data privacy regulations by tracking how employee data is collected, processed, and shared within the organization. ... This proactive monitoring reduces the likelihood of non-compliance and minimizes risks associated with data breaches, helping organizations align with both international and domestic privacy laws like the Indian IT Privacy Law.


Are humans reading your AI conversations?

Tools like OpenAI’s ChatGPT and Google’s Gemini are being used for all sorts of purposes. In the workplace, people use them to analyze data and speed up business tasks. At home, people use them as conversation partners, discussing the details of their lives — at least, that’s what many AI companies hope. After all, that’s what Microsoft’s new Copilot experience is all about — just vibing and having a chat about your day. But people might share data that’d be better kept private. Businesses everywhere are grappling with data security amid the rise of AI chatbots, with many banning their employees from using ChatGPT at work. They might have specific AI tools they require employees to use. Clearly, they realize that any data fed to a chatbot gets sent to that AI company’s servers. Even if it isn’t used to train genAI models in the future, the very act of uploading data could be a violation of privacy laws such as HIPAA in the US. ... Companies that need to safeguard business data and follow the relevant laws should carefully consider the genAI tools and plans they use. It’s not a good idea to have employees using a mishmash of tools with uncertain data protection agreements or to do anything business-related through a personal ChatGPT account.


CIOs recalibrate multicloud strategies as challenges remain

Like many enterprises, Ally Financial has embraced a primary public cloud provider, adding in other public clouds for smaller, more specialized workloads. It also runs private clouds from HPE and Dell for sensitive applications, such as generative AI and data workloads requiring the highest security levels. “The private cloud option provides us with full control over our infrastructure, allowing us to balance risks, costs, and execution flexibility for specific types of workloads,” says Sathish Muthukrishnan, Ally’s chief information, data, and digital officer. “On the other hand, the public cloud offers rapid access to evolving technologies and the ability to scale quickly, while minimizing our support efforts.” Yet, he acknowledges a multicloud strategy comes with challenges and complexities — such as moving gen AI workloads between public clouds or exchanging data from a private cloud to a public cloud — that require considerable investments and planning. “Aiming to make workloads portable between cloud service providers significantly limits the ability to leverage cloud-native features, which are perhaps the greatest advantage of public clouds,” Muthukrishnan says.


DevOps and Cloud Integration: Best Practices

CI/CD practices are crucial for DevOps implementation with cloud services. Continuous integration regularly merges code changes into a shared repository, where automated tests are run to spot issues early. On the other hand, continuous deployment improves this practice by automatically deploying changes (once they pass tests) to production. The CI/CD approach can accelerate the release cycle and enhance the overall quality of the software. ... Infrastructure as Code (IaC) empowers teams to oversee and provision infrastructure via code rather than manual processes. This DevOps methodology guarantees uniformity across environments and facilitates infrastructure scalability in cloud-based settings. It represents a pivotal element in transforming any enterprise's DevOps strategy. ... According to DevOps experts, security needs to be a part of every step in the DevOps process, called DevSecOps. This means adding security checks to the CI/CD pipeline, using security tools for the cloud, and always checking for security issues. DevOps professionals usually stress how important it is to tackle security problems early in the development process, called "shifting left."
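A minimal Infrastructure-as-Code sketch, assuming Pulumi's Python SDK and an AWS account; the resource and its settings are illustrative only, and the same declare-review-version workflow applies to Terraform or CloudFormation:

```python
import pulumi
from pulumi_aws import s3

# Declaring the bucket in code (rather than clicking through a console)
# makes the environment reproducible, reviewable, and version-controlled.
artifacts = s3.Bucket(
    "build-artifacts",
    versioning=s3.BucketVersioningArgs(enabled=True),
)

pulumi.export("artifacts_bucket", artifacts.id)
```

Because the definition lives in the repository, every environment change goes through the same pull-request review and CI checks as application code.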


Data Resilience & Protection In The Ransomware Age

Backups are considered the primary way to recover from a breach, but is this enough to ensure that the organisation will be up and running with minimal impact? Testing is a critical component to ensuring that a company can recover after a breach and provides valuable insight into the steps that the company will need to take to recover from a variety of scenarios. Unfortunately, many organisations implement measures to recover but fail on the last step of their resilience approach, namely testing. Without this step, they cannot know if their recovery strategy is effective. Testing is a critical component as it provides valuable insight into the steps it needs to take to recover, what works, and what areas it needs to focus on for the recovery process, the amount of time it will take to recover the files and more. Without this, companies will not know what processes to follow to restore data following a breach, as well as timelines to recovery. Equally, they will not know if they have backed up their data correctly before an attack if they have not performed adequate testing. Although many IT teams are stretched and struggle to find the time to do regular testing, it is possible to automate the testing process to ensure that it occurs frequently.
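A minimal Python sketch of an automatable restore test: back up a file, restore it to scratch space, and verify integrity by checksum. The copy call stands in for a real backup platform's restore API, which is the assumption to replace in practice:

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def test_restore(source: Path, backup: Path) -> bool:
    """Restore the backup to scratch space and compare checksums."""
    with tempfile.TemporaryDirectory() as scratch:
        restored = Path(scratch) / backup.name
        shutil.copy2(backup, restored)  # stand-in for a real restore call
        return sha256(restored) == sha256(source)

# Scheduled nightly, a test like this turns "we think backups work"
# into a verified answer, plus a measured time-to-recover.
```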


Is data gravity no longer centered in the cloud?

The need for data governance and security is escalating as AI becomes more prevalent. Organizations are increasingly aware of the risks associated with cloud environments, especially regarding regulatory compliance. Maintaining sensitive data on premises allows for tighter controls and adherence to industry standards, which are often critical in AI applications dealing with personal or confidential information. The convergence of these factors signals a broader reevaluation of cloud-first strategies, leading to hybrid models that balance the benefits of cloud computing with the reliability of traditional infrastructures. This hybrid approach facilitates a tailored fit for various workloads, optimizing performance while ensuring compliance and security. ... Data can exist on any platform, and accessibility should not be problematic regardless of whether data resides on public clouds or on premises. Indeed, the data location should be transparent. Storing data on-prem or with public cloud providers affects how much an enterprise spends and the data’s accessibility for major strategic applications, including AI. Currently, on-prem is the most cost-effective AI platform—for most data sets and most solutions. 


Choosing Between Cloud and On-Prem MLOps: What's Best for Your Needs?

The big benefit of cloud MLOps is the availability of virtually unlimited quantities of CPU, memory, and storage resources. Unlike on-prem environments, where resource capacity is limited by the amount of servers available and the resources each one provides, you can always acquire more infrastructure in the cloud. This makes cloud MLOps especially beneficial for ML use cases where resource needs vary widely or are unpredictable. ... On-prem MLOps may also offer better performance. On-prem environments don't require you to share hardware with other customers (which the cloud usually does), so you don't have to worry about "noisy neighbors" slowing down your MLOps pipeline. The ability to move data across fast local network connections can also boost on-prem MLOps performance, as can running workloads directly on bare metal, without a hypervisor layer reducing the amount of resources available to your workloads. ... You could also go on, under a hybrid MLOps approach, to deploy your model either on-prem or in the cloud depending on factors like how many resources inference will require. 



Quote for the day:

"You'll never get ahead of anyone as long as you try to get even with him." -- Lou Holtz

Daily Tech Digest - October 22, 2024

GenAI surges in law firms: Will it spell the end of the billable hour?

All areas of law will use genAI, according to Joshua Lenon, Clio’s Lawyer in Residence. That’s because AI content generation and task automation tools can help the business side and practice efforts of law firms. However, areas that have repetitive workflows and large document volumes – like civil litigation – will adopt genAI e-discovery tools more quickly. Practice areas that charge exclusively flat fees – like traffic offenses and immigration – are already the largest adopters of genAI. ... Nearly three-quarters of a law firm’s hourly billable tasks are exposed to AI automation, with 81% of legal secretaries’ and administrative assistants’ tasks being automatable, compared to 57% of lawyers’ tasks, according to a survey of both legal professionals (1,028) and other adults (1,003) in the U.S. general population, by Clio. Hourly billing has long been the preference of many professionals, from lawyers to consultants, but AI adoption is upending this model where clients are charged for the time spent on services. ... People have been talking about the demise of the billable hour for about 30 years “and nothing’s killed it yet,” said Ryan O’Leary, research director for privacy and legal technology at IDC. “But if anything will, it’ll be this.”


IT security and government services: Balancing transparency and security

For cyber defenses, government IT leaders should invest in website hosting services with Secure Sockets Layer (SSL) encryption, further enhanced with HTTP Strict Transport Security (HSTS). These measures ensure that all data exchanged via government sites is encrypted, protecting resident self-service features such as online voter registration, permit submissions, utility bill payments, and more. By enforcing HSTS, websites are also protected from protocol downgrade attacks and cookie hijacking, ensuring that all connections remain secure, and reducing the risk of data interception. Other marks of a reliable website hosting solution provider include DDoS mitigation coverage and reliability around regular software patching and updates. For all digital partners, it’s essential to consider third-party risk. Some of the most valuable information residents should be able to access – meeting minutes, agendas, and other documents pertaining to local governing decisions – are hosted by document management vendors. To ensure this access is secure, each vendor must be vetted on its security capabilities, so that critical data is always protected and hackers are unable to block residents’ access or move laterally further into government networks.
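What HSTS enforcement looks like at the application layer, as a minimal Python sketch using Flask; the same header can equally be set in the web server or CDN configuration:

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_hsts(response):
    # Tell browsers to use HTTPS only for the next year, including
    # subdomains; this mitigates downgrade attacks and cookie hijacking.
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response

@app.route("/")
def index():
    return "Every response now carries the HSTS header."
```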


Software buying trends are changing: From SaaS to outcome as a service

The last decade saw the rise of Software-as-a-Service (SaaS), transforming how businesses approached software deployment. This decade belongs to Outcomes-as-a-Service. CIOs are no longer interested in building large internal developer teams or experimenting with different platforms. They seek business-impacting solutions with tangible outcomes that drive business success. Business teams need solutions that deliver results today, not tomorrow. ... AI-powered hyperautomation combines generative AI, BPM, RPA, integrations, analytics, and app-building to drive end-to-end outcomes. In today’s dynamic business environment, an integrated approach is essential. Siloed automation with narrowly focused platforms is no longer sufficient. ... AI platforms excel in delivering outcomes at speed and scale. Leveraging automation expertise, they ensure outcomes linked to growth, efficiency, and compliance. The platform implements continuous cycles of process mining, implementation, adoption, and solution refinement until desired objectives are met. They also offer a comprehensive solution, managing everything from process definition and refinement to platform implementation, support, application development, and adoption.


How Retailers Are Using Tech for Competitive Advantage

“While technology can streamline operations, an overreliance on automation without human touch can sometimes backfire,” Peters says. “Consumers still value human interaction, especially in complex support scenarios. It’s crucial for retailers to balance automation with human agents, particularly in areas that require empathy and nuanced decision-making.” ... Companies of all sizes benefit from greater organizational efficiency, and tech has been the fuel powering digital transformation. For example, Lowes uses AR for home improvement shopping while Sephora uses it for virtual make up try-ons. Walmart is stepping up automation in its battle against Amazon. But smaller retailers are benefiting, too. ... “One of our customer’s last large-scale automation took them five years from the time they started the concept to deployment,” Naslund says. “For context, the pandemic was four and a half years, and the amount of volatility that the supply chain saw over the four years was insane. We saw inventory gluts, inventory shortages, and panic buying. Then you saw a warehouse shortage capacity, everybody's panicking to get warehouses. Then, they suddenly have too much space.”


Why and How IT Leaders Can Embrace the AI Revolution

AI software certainly has some consequences for IT departments. There may be new types of workflows to manage, new user requests to support, and new application deployments to track. But unless your business is actually building complex AI solutions from scratch — which it probably isn’t, or shouldn’t be, because sophisticated, mature AI tools and services are available from external vendors, complete with support plans and SLAs — implementing AI is not especially challenging. That’s because most third-party AI solutions boil down to SaaS apps that work just like any other SaaS: the vendor builds, manages, and supports them, with few resources and little effort required of customers’ IT departments. From the perspective of IT, implementing AI isn’t all that different from implementing any other type of software. ... For IT, there are no genuinely novel data privacy or security risks at stake here. The app ingests financial data, but so do plenty of non-AI applications. IT’s responsibility for managing data security for this type of app boils down to vetting the vendor by reviewing its data management and compliance practices. The fact that the app uses AI doesn’t change this process.


Has the time come for integrated network and security platforms?

Interest in platformization is growing among enterprises, asserts Extreme Networks, which recently surveyed 200 CIOs and senior IT leaders for its research, CIO Insights Report: Priorities and Investment Plans in the Era of Platformization. ... A platform that helps organizations transition their network to the cloud to streamline IT efficiency and lower total cost of ownership is important, respondents said. In addition, 55% of respondents emphasized the need to integrate offerings from a broad ecosystem of networking and security vendors, indicating a clear demand for unified platforms, Extreme concluded. ... “The message I got from the survey was that customers are operating in a world where there’s a massive proliferation of products and applications, and that’s really translating into complexity. Complexity is equal to risk, and that complexity is happening in multiple places,” said Extreme Networks CTO Nabil Bukhari. Complexity is an interesting topic because it changes over time, Bukhari said. The first Ford cars were basically just an engine with brakes, but they were complicated to start and drive. “Now, if you look at a car, they are like data centers on wheels. But driving and owning them is exponentially easier,” Bukhari said.


How legacy IT systems can hold your business back

While legacy IT systems may still be functional, they can hold a business back from reaching its full potential – especially if market competitors are busy upgrading their own systems. Companies need to carefully evaluate the costs and benefits of keeping legacy systems in place and develop a plan to modernize their IT infrastructure. Investing in a modern data center solution can, over time, improve business agility, security, and your organization’s bottom line. ... This is especially true for next-generation, AI-dependent applications built on large language models (LLMs) and machine learning (ML). Enterprise servers, storage and networking hardware, and software manufactured before about 2016 were not designed with scaled-up data workloads in mind – especially workloads for genAI, which only started to take off in 2021. This can hinder growth and force companies to invest in additional hardware or software just to maintain current operations. Legacy systems are also more prone to failures and outages due to aging hardware and software. This downtime disrupts operations and leads to lost revenue, especially for critical business functions. Additionally, data loss from system crashes can be costly to recover from.


Architecture Inversion: Scale by Moving Computation, Not Data

Now why should the rest of us care, blessed as we are with a lack of the billions of users that TikTok, Google and the like are burdened with? A number of factors are becoming relevant: ML algorithms are improving, and so is local compute capacity, meaning that fully scoring items gives a larger boost in quality – and ultimately profit – than used to be the case. With the advent of vector embeddings, the signals consumed by such algorithms have grown by one to two orders of magnitude, making the network bottleneck more severe. Applying ever more data to solve problems is increasingly cost-effective, which means ever more data must be rescored to hold quality loss constant. And as the consumers of data from such systems shift from mostly humans to mostly LLMs in RAG solutions, it becomes beneficial to deliver larger amounts of scored data, faster, in more applications than before. ... For these reasons, the scaling tricks of the very biggest players are becoming increasingly relevant to the rest of us. This has led to the current proliferation of architecture inversion: moving from traditional two-tier systems, where data is looked up from a search engine or database and sent to a stateless compute tier, to architectures that push that computation into the data layer itself.
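As a rough illustration of that inversion, here is a minimal, self-contained Python sketch. The DataNode class, the shard sizes, and the dot-product scoring are illustrative assumptions rather than any particular engine’s API; the point is that each shard scores its own vectors locally and returns only a small top-k, instead of shipping raw embeddings across the network to a stateless compute tier.

# Hypothetical sketch of architecture inversion: scoring runs inside
# each data shard, so only (score, index) pairs cross the network.
import heapq
import numpy as np

class DataNode:
    """One shard of the corpus, holding its own embedding vectors."""
    def __init__(self, vectors):
        self.vectors = vectors  # shape: (num_items, dim)

    def top_k(self, query, k):
        # Score locally, next to the data. Indices are local to this
        # shard; a real system would also tag results with a shard id.
        scores = self.vectors @ query
        idx = np.argpartition(scores, -k)[-k:]
        return [(float(scores[i]), int(i)) for i in idx]

def inverted_search(nodes, query, k):
    # Merge each shard's local top-k into a global top-k, instead of
    # pulling every candidate vector into a central compute tier.
    candidates = []
    for node in nodes:
        candidates.extend(node.top_k(query, k))
    return heapq.nlargest(k, candidates)

rng = np.random.default_rng(0)
nodes = [DataNode(rng.normal(size=(10_000, 128))) for _ in range(4)]
query = rng.normal(size=128)
print(inverted_search(nodes, query, k=5))

With four shards of 10,000 vectors of 128 dimensions each, the inverted version moves only 4 × k small (score, index) pairs across the network boundary rather than 40,000 × 128 floats, which is exactly the bottleneck the article describes.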


The secret to successful digital initiatives is pretty simple, according to Gartner

As with all technologies, seeing results from AI comes down to focusing like a laser beam on the problem at hand: "In my experience, the businesses that start with a real use case and problem are seeing an ROI," Julian LaNeve, chief technology officer at Astronomer, a data platform company, told ZDNET. "They define a well-scoped, impactful problem and use gen AI to solve [it], and it's easy to measure success and ROI. The most successful business cases identify how to solve a problem that the business already cares deeply about and [will] deliver additional value to customers." Technology maturity also makes a difference in success rates. "Previous generations of AI were narrower in scope but have been successful," said Dominic Sartorio, vice president at Denodo, a data management provider. "AI is helping with predictive maintenance of manufactured goods, predicting demand spikes in [the] markets, and finding the optimal routes for logistics, and [has] been successful for many years." Furthermore, according to Gartner, companies that treat their digital initiatives in a collaborative fashion -- between business and IT leaders -- rather than leaving all things digital up to their IT departments are successful with technology. 


Showing AI users diversity in training data can boost perceived fairness and trust

The work investigated whether displaying racial diversity cues—the visual signals on AI interfaces that communicate the racial composition of the training data and the backgrounds of the typically crowd-sourced workers who labeled it—can enhance users' expectations of algorithmic fairness and trust. Their findings were recently published in the journal Human-Computer Interaction. AI training data is often systematically biased in terms of race, gender and other characteristics, according to S. Shyam Sundar, Evan Pugh University Professor and director of the Center for Socially Responsible Artificial Intelligence at Penn State. "Users may not realize that they could be perpetuating biased human decision-making by using certain AI systems," he said. Lead author Cheng "Chris" Chen, assistant professor of communication design at Elon University, who earned her doctorate in mass communications from Penn State, explained that users are often unable to evaluate biases embedded in AI systems because they don't have information about the training data or the trainers. "This bias presents itself after the user has completed their task, meaning the harm has already been inflicted, so users don't have enough information to decide if they trust the AI before they use it," Chen said.


Quote for the day:

"It takes courage and maturity to know the difference between a hoping and a wishing." -- Rashida Jourdain