Daily Tech Digest - September 30, 2024

What Will Be the Next Big Thing in AI?

The next big thing in AI will likely be advanced multimodal models that can seamlessly integrate and process different types of data, including text, images, audio, and video, in more human-like ways, says Dinesh Puppala, regulatory affairs lead at Google. "We're moving beyond models that specialize in one type of data toward AI systems that can understand and generate across multiple modalities simultaneously, much like humans do," he notes. Advanced multimodal models will enable more natural and context-aware human-AI interactions. "They'll be better at understanding nuanced queries, interpreting visual and auditory cues, and providing more holistic and relevant responses," Puppala predicts. ... Metacognition in AI -- systems that can think about the way they think -- is on the mind of Isak Nti Asare, co-director of the cybersecurity and global policy program at Indiana University. "This capability, often described as AI self-awareness, is a necessary frontier to cross if we are to build trustworthy systems that can explain their decisions." Current AI systems, while advanced, often operate as "black boxes" where even their creators cannot fully explain their outputs. 


Best Practices for Sunsetting Mainframe Applications

The first crucial step in migrating from a mainframe to the cloud is the discovery phase. During this phase, organizations must conduct a thorough assessment of their current mainframe environment, including architecture, applications, data, dependencies, and workflows. This comprehensive understanding helps in identifying potential risks and planning the migration process effectively. The insights gained are crucial for setting the stage for the subsequent cost-benefit analysis (CBA), ensuring all stakeholders are on board with the proposed changes. A detailed CBA is essential to evaluate the financial feasibility and potential returns of the migration project. This analysis should account for all costs associated with the migration, including software licensing, cloud storage fees, and ongoing maintenance costs. It should also highlight the benefits, such as improved operational efficiency and reduced downtime, which are crucial for gaining stakeholder support. ... Effective risk management is crucial for a successful migration. This involves ensuring the availability of subject matter experts, comprehensive planning, and addressing potential issues with legacy systems. 
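To make the CBA step concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure is an illustrative assumption; a real analysis would also model licensing tiers, staffing, decommissioning, and risk.

```python
# Illustrative-only figures for a mainframe-to-cloud cost-benefit analysis (CBA).
YEARS = 5

migration_cost = 2_500_000    # one-time: re-platforming, testing, training (assumed)
mainframe_annual = 1_800_000  # current licensing, hardware, support (assumed)
cloud_annual = 1_100_000      # cloud services, storage fees, ongoing maintenance (assumed)

status_quo_total = mainframe_annual * YEARS
cloud_total = migration_cost + cloud_annual * YEARS
annual_savings = mainframe_annual - cloud_annual

print(f"5-year cost, stay on mainframe: ${status_quo_total:,.0f}")
print(f"5-year cost, migrate to cloud:  ${cloud_total:,.0f}")
print(f"net 5-year benefit:             ${status_quo_total - cloud_total:,.0f}")
print(f"payback period:                 {migration_cost / annual_savings:.1f} years")
```

Even a rough model like this gives stakeholders a shared payback figure to debate before committing to the migration.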


Security spending signals major role change for CISOs and their teams

“Expected to do more with less,” CISOs are shifting their focus, Kalinov adds. “Instead of beefing up their internal teams, they’re focusing on risk management, regulatory compliance, and keeping C-suite executives aware of the evolving security landscape,” Kalinov says. James Neilson, SVP of international sales at cybersecurity vendor OPSWAT, believes the increasing allocation of security budgets toward software and services rather than staff reflects the CISO’s evolving role from managing internal teams toward becoming a more strategic, technology-driven leader. “This trend also indicates that they’re taking on a more prominent role in risk management, ensuring that outsourced services complement internal capabilities while maintaining agility in response to evolving threats,” Neilson says. As a result, security organizations are also undergoing a shift from traditionally siloed, in-house approaches toward a more integrated, outsourced, and technology-driven model, Neilson argues. ... “Organizations increasingly rely on elements of external managed services and advanced automation tools to manage cybersecurity, focusing internal resources on understanding the business and its risks, defining higher-level strategy, oversight, and risk management,” Neilson contends.


Shadow AI, Data Exposure Plague Workplace Chatbot Use

The issue is that most of the most prevalent chatbots capture whatever information users put into prompts, which could be things like proprietary earnings data, top-secret design plans, sensitive emails, customer data, and more — and send it back to the large language models (LLMs), where it's used to train the next generation of GenAI. ... ChatGPT’s creator, OpenAI, warns in its user guide, "We are not able to delete specific prompts from your history. Please don't share any sensitive information in your conversations." But it's hard for the average worker to constantly be thinking about data exposure. Lisa Plaggemier, executive director of NCA, notes one case that illustrates how the risk can easily translate into real-world attacks. "A financial services firm integrated a GenAI chatbot to assist with customer inquiries," Plaggemier tells Dark Reading. "Employees inadvertently input client financial information for context, which the chatbot then stored in an unsecured manner. This not only led to a significant data breach, but also enabled attackers to access sensitive client information, demonstrating how easily confidential data can be compromised through the improper use of these tools."


Can AWS hit its sustainability targets and win the AI race?

“Once we achieve that goal, we're looking at what's next beyond that?” Walker says. “As you look beyond just wind and solar, we need to look at what else is in our tool belt, especially looking further ahead to 2040 and how we're going to reach those ultimate goals, and carbon-free energy sources are the next evolution of that.” When asked whether carbon-free energy to the company means nuclear, geothermal, or something else, Walker says the company is open. “We're not limiting the options; we're looking beyond the traditional renewable sources and seeing what else there is. Carbon-free energy sources are going to be one of the tools that we're going to double down on and start looking at.” ... When asked if AWS will look to acquire more data centers close to nuclear plants or merely sign more PPAs that involve nuclear power, Walker says the company is looking at “all of the above.” “We haven't limited our options in terms of capacity. Depending on where we're building and at the rate we need to scale, [it's] certainly going to be part of the conversation.” Longer term, fusion energy could perhaps power the company’s data centers. Microsoft and OpenAI have invested in Helion, which is promising to crack the elusive technology before 2030. Google has invested in Tae Technologies.


6 ways to apply automation in devsecops

Securing continuous development processes is an extension of collaboration security. In most organizations today, multiple individuals on multiple teams write code every day — fixing bugs, adding new features, improving performance, etc. Consider an enterprise with three different teams contributing to the application code. Each is responsible for its own area. Once Team 1 checks in updated code, the build manager needs to ensure that this new code is compatible with code already contributed by Teams 2 and 3. The build manager creates a new build and scans it to ensure there are no vulnerabilities. With so much code being contributed, automation is critical. Only by automating the build creation, compatibility, and approval cycle can a business ensure that each step is always taken and done in a consistent manner. ... For larger enterprises, which may have thousands of developers checking in code daily, automation is a matter of survival. Even smaller companies must begin putting automated processes in place if they want to keep their developers productive while ensuring the security of their code.
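As a rough illustration of that automated build, compatibility, scan, and approval cycle, the Python sketch below chains the steps and rejects the check-in if any step fails. The build, test, and scanner commands are hypothetical placeholders for whatever tooling an organization actually uses.

```python
import subprocess
import sys

# Hypothetical commands; substitute your organization's build tool and scanner.
BUILD_CMD = ["make", "build"]                      # assumed build step
TEST_CMD = ["make", "integration-test"]            # cross-team compatibility checks
SCAN_CMD = ["vuln-scanner", "--target", "build/"]  # hypothetical scanner CLI


def run_step(name: str, cmd: list[str]) -> bool:
    """Run one pipeline step and report whether it succeeded."""
    print(f"[pipeline] running {name}: {' '.join(cmd)}")
    return subprocess.run(cmd).returncode == 0


def main() -> int:
    # 1. Build the merged code from all contributing teams.
    if not run_step("build", BUILD_CMD):
        print("[pipeline] build failed; rejecting check-in")
        return 1
    # 2. Verify the new code is compatible with code from the other teams.
    if not run_step("compatibility tests", TEST_CMD):
        print("[pipeline] compatibility tests failed; rejecting check-in")
        return 1
    # 3. Scan the build for vulnerabilities before approval.
    if not run_step("vulnerability scan", SCAN_CMD):
        print("[pipeline] scan found issues; build not approved")
        return 1
    print("[pipeline] build approved")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Encoding the gate as code is what makes the cycle repeatable: every check-in from every team passes through the same steps in the same order.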


AI, AI Everywhere! But are we Truly Prepared?

AI models are reflections of the massive databases they feed on, and the entire internet is on their plate. Every time a user runs a query or prompts the model with a search requirement, the AI runs it through the maximum accessible data in its capacity, figures out relevant touchpoints, and frames the response the user demands using its intelligent capabilities. Not surprisingly, the ever-learning and self-evolving capabilities of AI models consume more power than the search-and-response process itself. The volume of users, due to the soaring popularity of the technology, further adds to the power consumption. ... The exercise lights up a directional path for the artificially intelligent technology to learn and evolve accordingly. This entire process of training an AI model can range anywhere from a few minutes to several months, and throughout it the GPUs powering the machines keep running all day, eating into large volumes of power. On the bright side, experts have pointed out that specialised AI models consume significantly less power than generic models.


Ransomware attackers hop from on-premises systems to cloud

“Storm-0501 is the latest threat actor observed to exploit weak credentials and over-privileged accounts to move from organizations’ on-premises environment to cloud environments. They stole credentials and used them to gain control of the network, eventually creating persistent backdoor access to the cloud environment and deploying ransomware to the on-premises,” Microsoft shared last week. ... “Microsoft Entra Connect Sync is a component of Microsoft Entra Connect that synchronizes identity data between on-premises environments and Microsoft Entra ID,” Microsoft explained. “We can assess with high confidence that in the recent Storm-0501 campaign, the threat actor specifically located Microsoft Entra Connect Sync servers and managed to extract the plain text credentials of the Microsoft Entra Connect cloud and on-premises sync accounts. The compromise of the Microsoft Entra Connect Sync account presents a high risk to the target, as it can allow the threat actor to set or change Microsoft Entra ID passwords of any hybrid account.” The second approach – hijacking a Domain Admin user account that has a respective user account in Microsoft Entra ID – is also possible.


How AI is transforming business today

“We’re seeing lots of efficiencies where back, middle, and front-end workflows are being automated. So, yes, you can automate your existing processes, and that’s good and you can get a 20% [improvement in efficiency]. But the real gain is to reimagine the process itself,” she says. In fact, the gains AI can bring when used to reimagine processes are so significant that she says AI challenges the very concept of “process” itself. That’s because organizations can use AI to devise ways to reach specific desired outcomes without having a bias toward keeping and improving existing workflows. “Say you want to increase customer satisfaction by 35%. That’s the input. It’s less about how the process works. The process itself becomes almost irrelevant,” she explains. “The technology is good at achieving an object, a goal, and the concept of process itself, the sequence itself, is blown away. That is conceptually a big shift when you think of the enterprise, which is built on three things: people, process, and technology, and here’s a technology — AI — that doesn’t care about a process but is instead focused on outcome. That is truly disruptive.”


California Gov. Newsom Vetoes Hotly Debated AI Safety Bill

Newsom said he had asked generative AI experts, including Dr. Li, Tino Cuéllar of the National Academy of Sciences Committee on Social and Ethical Implications of Computing Research, and Jennifer Tour Chayes from the College of Computing, Data Science, and Society at UC Berkeley, to help California develop "workable guardrails" that focused on "developing an empirical, science-based trajectory analysis." He also asked state agencies to expand their assessment of AI risks from potential catastrophic events related to AI use. "We have a responsibility to protect Californians from potentially catastrophic risks of GenAI deployment. We will thoughtfully - and swiftly - work toward a solution that is adaptable to this fast-moving technology and harnesses its potential to advance the public good," he said. Among the AI bills Newsom has signed is SB 896, which requires California's Office of Emergency Services to expand its work assessing AI's potential threat to critical infrastructure. The governor also directed the agency to undertake the same risk assessment with water infrastructure providers and the communications sector.



Quote for the day:

"Have the dogged determination to follow through to achieve your goal, regardless of circumstances or whatever other people say, think, or do." -- Paul Meyer

Daily Tech Digest - September 29, 2024

Updating Enterprise Technology to Scale to ‘AI Everywhere’

Operational systems with significant unstructured data will face substantial re-architecting due to generative AI’s ability to make use of previously underutilized data sources. In our experience, the most common solution patterns for generative AI use cases in operational systems fall within the areas of content generation, knowledge management, and reporting and documentation ... As generative AI model use cases get deployed across critical systems and complexity increases, it will put further demands on collaboration, quality control, reliability, and scalability. AI models will need to be treated with the same discipline as software code by adopting MLOps processes that use DevOps to manage models through their life cycle. Companies should set up a federated AI development model in line with the AIaaS platform. This should define the roles of teams that produce and consume AI services, as well as the processes for federated contribution and how datasets and models are to be shared. Given the pace of evolution of generative AI, it is also imperative to create AI-first software development processes that allow for rapid iteration of new solutions and architectures. 
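As a loose illustration of treating models with code-like discipline, the sketch below models a toy in-memory registry with versioning, dataset lineage, an owning team, and an approval gate. It is not any particular MLOps product; the fields and names are assumptions for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ModelVersion:
    """One registered model version, tracked like a code artifact."""
    name: str
    version: str
    dataset_id: str          # which dataset the model was trained on
    owner_team: str          # producing team in the federated model
    approved: bool = False   # promotion gate, analogous to a code review
    registered_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class ModelRegistry:
    """Toy registry illustrating versioning and promotion gates for shared AI services."""

    def __init__(self) -> None:
        self._models: dict[tuple[str, str], ModelVersion] = {}

    def register(self, mv: ModelVersion) -> None:
        self._models[(mv.name, mv.version)] = mv

    def approve(self, name: str, version: str) -> None:
        self._models[(name, version)].approved = True

    def latest_approved(self, name: str) -> ModelVersion | None:
        candidates = [m for (n, _), m in self._models.items() if n == name and m.approved]
        return max(candidates, key=lambda m: m.registered_at) if candidates else None


registry = ModelRegistry()
registry.register(ModelVersion("doc-summarizer", "1.2.0",
                               dataset_id="contracts-2024-q3",
                               owner_team="knowledge-mgmt"))
registry.approve("doc-summarizer", "1.2.0")
print(registry.latest_approved("doc-summarizer"))
```

The point of the pattern is that consumers only ever pull the latest approved version, while producing teams retain ownership of the dataset and retraining path.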


EPSS vs. CVSS: What's the Best Approach to Vulnerability Prioritization?

EPSS is a model that provides a daily estimate of the probability that a vulnerability will be exploited in the wild within the next 30 days. The model produces a score between 0 and 1 (0 and 100%), with higher scores indicating a higher probability of exploitation. The model works by collecting a wide range of vulnerability information from various sources, such as the National Vulnerability Database (NVD), CISA KEV, and Exploit-DB, along with evidence of exploitation activity. ... By considering EPSS when prioritizing vulnerabilities, organizations can better align their remediation efforts with the actual threat landscape. For example, if EPSS indicates a high probability of exploitation for a vulnerability with a relatively low CVSS score, security teams might consider prioritizing that vulnerability over others that may have higher CVSS scores but a lower likelihood of exploitability. ... Intruder is a cloud-based security platform that helps businesses manage their attack surface and identify vulnerabilities before they can be exploited. By offering continuous security monitoring, attack surface management, and intelligent threat prioritization, Intruder allows teams to focus on the most critical risks while simplifying cybersecurity.
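A minimal sketch of that prioritization logic, with illustrative thresholds and placeholder CVE identifiers, might look like this:

```python
from dataclasses import dataclass


@dataclass
class Vulnerability:
    cve_id: str
    cvss: float   # severity score, 0-10
    epss: float   # probability of exploitation in the next 30 days, 0-1


def priority(v: Vulnerability) -> str:
    """Illustrative triage rule: exploitation likelihood can outrank raw severity."""
    if v.epss >= 0.5:                      # likely to be exploited soon
        return "urgent"
    if v.cvss >= 9.0 and v.epss >= 0.1:    # critical and plausibly exploitable
        return "high"
    if v.cvss >= 7.0:
        return "medium"
    return "low"


vulns = [
    Vulnerability("CVE-A", cvss=5.3, epss=0.82),  # modest severity, high exploitation probability
    Vulnerability("CVE-B", cvss=9.8, epss=0.02),  # critical severity, low exploitation probability
]

for v in sorted(vulns, key=lambda x: (x.epss, x.cvss), reverse=True):
    print(v.cve_id, priority(v))
```

In this toy example the lower-severity CVE-A lands at the top of the queue because its exploitation probability is far higher, which is exactly the reordering EPSS is meant to enable.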


How To Embrace The Enterprise AI Era

As enterprises rush to adopt AI technologies, there's a growing concern about the responsible use of these powerful tools. Ramaswamy stresses the importance of a thoughtful approach to AI implementation: "We mandated very early that any models that we train needed obviously to only take data that we had free use rights on, but we said they also need to have model cards so that if there is a problem with the data source, you can go back, retrain a model without the data source." ... Developing a robust data strategy is essential for AI success. Organizations need a clear plan for managing, sharing, and leveraging data across the enterprise. This includes establishing data governance policies, ensuring data quality and consistency, and creating a unified data architecture that supports AI initiatives. A well-designed data strategy enables companies to break down silos, improve data accessibility, and create a solid foundation for AI-driven insights and decision-making. Embracing interoperability is another critical aspect of preparing for the enterprise AI era. Companies should look for solutions that support open data formats and easy integration with other tools and platforms. 


The Hidden Language of Data: How Linguistic Analysis Is Transforming Data Interpretation

Unlike conventional methods that focus on structured data, linguistic analysis delves into the complexities of human communication. It examines patterns, context, and meaning in text data, allowing us to extract trends and insights from sources like social media posts, customer reviews, and open-ended survey responses. Linguistic analysis in data science marries principles from the two fields. From linguistics, we borrow concepts like syntax (sentence structure), semantics (meaning), and pragmatics (context). These help us understand not just what words say, but how they’re used and what they imply. On the data science side, we leverage technologies like machine learning and natural language processing (NLP). These technologies allow us to automate the analysis of large volumes of text, identify patterns, and extract meaningful information at scale. ... Sentiment analysis is the process of determining the emotional tone behind words. It analyzes language to understand attitudes, opinions, and emotions expressed within text and identify whether a piece of text is positive, negative, or neutral.
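For example, a minimal sentiment-analysis sketch using NLTK's VADER analyzer could look like the following. It assumes the vader_lexicon resource can be downloaded, and it uses the common +/-0.05 compound-score convention, which is a rule of thumb rather than a fixed standard.

```python
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

analyzer = SentimentIntensityAnalyzer()

reviews = [
    "The checkout process was fast and the support team was wonderful.",
    "The app keeps crashing and nobody answers my emails.",
]

for text in reviews:
    scores = analyzer.polarity_scores(text)  # returns neg/neu/pos plus a compound score
    # Common convention: compound >= 0.05 is positive, <= -0.05 is negative.
    if scores["compound"] >= 0.05:
        label = "positive"
    elif scores["compound"] <= -0.05:
        label = "negative"
    else:
        label = "neutral"
    print(f"{label:8} {scores['compound']:+.2f}  {text}")
```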


Is Synthetic Data the Future of AI Model Training?

It is likely that the use of synthetic data will increase in the AI space. Gartner anticipates that it will outweigh the use of real data in AI models by 2030. “The use of it is going to grow over time, and if done correctly, [it will] allow us to create more evolved, more powerful, and more numerous models to inform the software that we're building,” Brown predicts. That potential future seems bright, but the road there is likely to come with a learning curve. “Mistakes are going to be made almost undoubtedly in the use of synthetic data initially. You're going to forget a key metric that would judge quality of data,” says Brown. “You're going to implement a biased model of some sort or a model that hallucinates maybe more than a previous model did.” Mistakes may be inevitable, but there will be new ways to combat them. As the use of synthetic data scales, the development of tools for robust quality checks will need to as well. “Just the same way that we've kept food quality high, we [need to] do the same thing to keep the model quality high,” Hazard argues.
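One way such a quality check might look in practice is sketched below: a two-sample Kolmogorov-Smirnov test from SciPy comparing a synthetic numeric feature against the real data it imitates. The distributions, sample sizes, and p-value gate are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Stand-ins for a real feature and its synthetic counterpart (e.g., transaction amounts).
real = rng.lognormal(mean=3.0, sigma=0.5, size=5_000)
synthetic = rng.lognormal(mean=3.1, sigma=0.5, size=5_000)

# Two-sample KS test: a small p-value suggests the synthetic distribution drifts from the real one.
stat, p_value = ks_2samp(real, synthetic)

print(f"mean (real/synthetic): {real.mean():.1f} / {synthetic.mean():.1f}")
print(f"KS statistic={stat:.3f}, p-value={p_value:.4f}")

# Illustrative gate: flag the synthetic batch for review if the distributions diverge too much.
if p_value < 0.01:
    print("quality check FAILED: synthetic data does not match the real distribution")
else:
    print("quality check passed")
```

Checks like this are the "food quality" inspections of the synthetic-data pipeline: cheap to run, and they catch drift before it ever reaches model training.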


Are You Sabotaging Your Cybersecurity Posture?

When ITDR entered the picture in 2020, it was in response to a cybersecurity industry struggling to protect suddenly remote COVID-era workforces with existing identity and access management (IAM) solutions. ... Organizations should never attempt to solve cybersecurity issues they’re not prepared to handle. Investing in the right specialists — whether in-house or externally — and ongoing training is essential to maintaining strong defenses. Your organization will fall behind quickly if your team isn’t continuously evolving. Where business leaders are concerned, cybersecurity is often an attractive place to trim expenses. But businesses simply cannot cut their cybersecurity budget and hope they don’t suffer a breach. Hackers aren’t stopping, so you can’t either. ... Operating on an “it won’t happen to us” mindset will always get your organization in trouble. When it comes to strengthening your organization’s cybersecurity posture, a shift from a reactive to a proactive mindset is crucial to staying ahead of evolving threats and preventing costly and damaging breaches. A comprehensive, identity-focused cybersecurity strategy is the best way to proactively defend against threats.


Millions of Kia Vehicles Open to Remote Hacks via License Plate

The researchers found that it was relatively easy to register a Kia dealer account and authenticate to it. They could then use the generated access token to call APIs reserved for use by dealers, for things like vehicle and account lookup, owner enrollment, and several other functions. After some poking around, the researchers found that they could use their access to the dealer APIs to enter a vehicle's license-plate information and retrieve data that essentially allowed them to control key vehicle functions. These included turning the ignition on and off, remotely locking and unlocking a vehicle, activating its headlights and horn, and determining its exact geolocation. In addition, they were able to retrieve the owner's personally identifying information (PII) and quietly register themselves as the primary account holder. That meant they had control of functions normally available only to the owner. The issues affected a range of Kia model years, from 2024 and 2025 all the way back to 2013. With the older vehicles, the researchers developed a proof-of-concept tool that showed how anyone could enter a Kia vehicle's license plate info and, in a matter of 30 seconds, execute remote commands on the vehicle.


How AI is reshaping accounting

For a while, the finance industry started to consider how to provide better information to guide investment decisions beyond just financial performance and ESG through integrated reporting. The term has fallen out of vogue. ... The corollary to continual close is that businesses will be able to make decisions using real-time data. Forrester predicts that over 70% of SMBs will integrate real-time data into financial decisions, empowering them to drive growth and innovation by 2030. Harris acknowledges that today, not all business is captured in real-time. Existing tools and infrastructure are insufficient to capture everything with the assurance that it is reliable. So, accounting data can get out of step by a few days to weeks. The vision is that with the right technology, particularly AI, they can take that delay down to zero to keep accounting data in lockstep with the business.
... The last prediction is that AI will automate many routine tasks and free accountants to focus on strategic thinking and providing business insights. This will create opportunities for accountants to expand into new roles that improve business strategy and facilitate innovation.


Harnessing AI and knowledge graphs for enterprise decision-making

Whether a company’s goal is to increase customer satisfaction, boost revenue, or reduce costs, there is no single driver that will enable those outcomes. Instead, it’s the cumulative effect of good decision-making that will yield positive business outcomes. It all starts with leveraging an approachable, scalable platform that allows the company to capture its collective knowledge so that both humans and AI systems alike can reason over it and make better decisions. Knowledge graphs are increasingly becoming a foundational tool for organizations to uncover the context within their data. What does this look like in action? Imagine a retailer that wants to know how many T-shirts it should order heading into summer. A multitude of highly complex factors must be considered to make the best decision: cost, timing, past demand, forecasted demand, supply chain contingencies, how marketing and advertising could impact demand, physical space limitations for brick-and-mortar stores, and more. We can reason over all of these facets and the relationships between them using the shared context a knowledge graph provides.
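A minimal sketch of that T-shirt example, using the NetworkX library with invented entities, relationships, and numbers, shows how facts connected to one node supply the shared context for a simple ordering decision:

```python
import networkx as nx

kg = nx.DiGraph()

# Entities and relationships relevant to the summer T-shirt order (illustrative values).
kg.add_edge("T-shirt", "Summer 2025", relation="forecast_demand", units=12_000)
kg.add_edge("T-shirt", "Summer 2024", relation="past_demand", units=9_500)
kg.add_edge("T-shirt", "Supplier A", relation="supplied_by", lead_time_weeks=6)
kg.add_edge("Supplier A", "Port congestion", relation="exposed_to_risk")
kg.add_edge("Summer campaign", "T-shirt", relation="boosts_demand", uplift=0.15)
kg.add_edge("Store network", "T-shirt", relation="shelf_capacity", units=11_000)

# Gather every fact directly connected to "T-shirt" to give the decision its shared context.
facts = []
for source, _, data in kg.in_edges("T-shirt", data=True):
    facts.append((source, data))
for _, target, data in kg.out_edges("T-shirt", data=True):
    facts.append((target, data))

for entity, data in facts:
    print(entity, data)

# A simple reasoning step over that context: demand uplift versus physical shelf limits.
forecast = 12_000 * (1 + 0.15)
order = min(forecast, 11_000)
print(f"suggested order, capped by shelf capacity: {order:.0f} units")
```

A real deployment would layer AI-driven reasoning over a far larger graph, but the mechanics are the same: the graph supplies connected context, and the decision logic traverses it.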


Data Blind Spots and Data Opportunities: What Banks and Credit Unions May Be Missing

Financial services leaders understand that getting the deal done is only half the battle. Effective execution of a merger or acquisition is famously difficult: across all industries, between 70% and 90% of mergers and acquisitions fail to achieve their intended goals or create shareholder value, according to research by McKinsey, Harvard Business Review and others. These failures can be due to a range of factors, including poor strategic fit, cultural clashes, integration challenges, or failure to realize projected synergies. For financial institutions in particular — FDIC data since 2019 indicates that some 4-5% of insured depositories merge annually — M&A can be a way of life and effective integration demands a data-first approach. When management data — such as financial reports, risk assessments, and accountholder information — is consolidated quickly, both institutions can harmonize their strategies, avoid duplicative efforts, and identify risks and synergies earlier. This data integration allows leadership teams to monitor KPIs, streamline operations, and make informed decisions that align with the newly combined FI’s objectives.



Quote for the day:

"Effective team leaders realize they neither know all the answers, nor can they succeed without the other members of the team." -- Katzenbach & Smith

Daily Tech Digest - September 28, 2024

IoT devices will be the catalyst for the 4th industrial revolution

The impact of IoT on product quality is not just reactive but also proactive. IoT-enabled traceability systems ensure that every component of a product can be tracked from its origin to the final assembly, ensuring full compliance with industry standards and regulations. Plus, automated systems can monitor and adjust energy usage in real-time, leading to more efficient operations that lower the overall carbon footprint of a facility. By minimizing energy waste, companies will contribute to a more sustainable environment while also realizing substantial cost savings. These savings can be reinvested into research and development, driving innovation and enhancing product quality. In return, compliance eliminates unnecessary product waste and energy consumption, which then lowers the final cost for consumers while heightening brand reputation. ... By combining the real-time data collection capabilities of IoT devices with AI-driven analytics, IoT technologies can be leveraged to enable the seamless integration of clean energy sources into industrial operations. Solar, wind, and other renewable energy sources can be efficiently managed through smart grids and automated systems that balance the energy load, ensuring that clean energy is utilized to its fullest potential. 


Hackers Weaponizing PDF Files To Deliver New SnipBot Malware

They exploit the ubiquity and perceived trustworthiness of PDFs to trick victims into opening malicious files that can contain malicious links, embedded code, or vulnerabilities that allow remote code execution. Security experts at Palo Alto Networks recently identified hackers actively weaponizing PDF files to deliver the new SnipBot malware. ... SnipBot employs a multi-stage infection process that begins with a signed executable disguised as a PDF and relies on anti-sandbox techniques such as checking process names and registry entries. To evade detection, the malware uses window message-based control-flow obfuscation and encrypted strings. It also downloads additional payloads, including a DLL that injects code into Explorer.exe through COM hijacking. SnipBot's core functionality includes a backdoor (single.dll) that creates a “SnipMutex” and enables threat actors to execute commands, upload and download files, and deploy extra modules. ... Through these evasion techniques, payload delivery methods, and post-infection capabilities, SnipBot compromises systems and exfiltrates sensitive data.


Novel Exploit Chain Enables Windows UAC Bypass

Despite the potential for privilege escalation, Microsoft refused to accept the issue as a vulnerability. After Fortra reported it, the company responded by pointing to the "non-boundaries" section of the Microsoft Security Servicing Criteria for Windows, which outlines how "some Windows components and configurations are explicitly not intended to provide a robust security boundary." ... Reguly and Fortra disagree with Microsoft's perspective. "When UAC was introduced, I think we were all sold on the idea that UAC was this great new security feature, and Microsoft has a history of fixing bypasses for security features," he says. "So if they're saying that this is a trust boundary that is acceptable to traverse, really what they're saying to me is that UAC is not a security feature. It's some sort of helpful mechanism, but it's not actually security related. I think it's a really strong philosophical difference." ... Philosophical differences aside, Reguly stresses that businesses need to be aware of the risk in allowing lower-integrity admins to escalate their privileges to attain full system controls.


How factories are transforming their operations with AI

One of the key end goals for the integration of AI in manufacturing is the establishment of 'lights-out factories' which means fully automating everything within the factory environment so that there is minimal to zero need for human input. Such is the lack of a need for human intervention that you can effectively manage the production process with the lights turned off. FANUC is one example of a company that operates a lights-out factory in Japan to build its robots, having done so since 2001. The company makes 50 robots for every 24-hour shift, according to the Association for Manufacturing Technology, with the factory running unsupervised for up to 30 days without human input. Automotive manufacturing is another sector in which AI has been a major positive influence. BMW's AIQX automates certain quality control processes by using sensor technology and AI. Algorithms analyze the data they record in real time and they send employees feedback immediately. It can quickly detect anomalies on the assembly line. Similarly, Rolls Royce has melded data analytics with AI, pulling in masses of data from in-service engines in real time and feeding this into digital twins. 


Beyond encryption: Hidden dangers in the wake of ransomware incidents

One of the most insidious threats in the post-ransomware landscape is the potential presence of multiple threat actors within a compromised environment. This scenario, while relatively rare, can have devastating consequences for victim organizations. The root of this problem often lies in the cyber incident ecosystem itself, particularly in the use of initial access brokers (IABs) by ransomware groups. These IABs, motivated by profit, may sell access to the same compromised network to multiple malicious actors. The result can be a perfect storm of cyber activity, with different groups vying for control of the same systems. ... Another vector for multiple-actor intrusions comes from an unexpected source: the tools used by information security professionals themselves. Malvertising campaigns have become increasingly sophisticated, targeting legitimate software distribution channels to spread compromised versions of popular security tools. Ironically, the very applications designed to protect systems can become Trojan horses for malicious actors. ... The complexity of modern cyber threats underscores the necessity of comprehensive forensic analysis following any security incident.


Prioritize Robust Engineering Over Overblown GenAI Promises

Beyond tackling data quality and scalability concerns, this necessary shift towards engineering innovation will lead to developing tools and frameworks that better support AI workflows, including handling large volumes of unstructured data (including images and videos). That, in turn, will foster a more collaborative and integrated approach between AI and data management practices. As the AI and data stacks complement each other, we can expect more cohesive and innovative solutions that address AI implementation’s technical and operational challenges. ... This maturation process promises substantial benefits beyond the realm of developers and engineers. Just as the dot-com bubble burst led to the refinement and widespread adoption of internet technologies, the current focus on data curation and engineering in AI will pave the way for transformative applications across various industries. Imagine AI-powered healthcare diagnostics that rely on meticulously curated data sets or financial systems that leverage AI for predictive analytics to manage risks more effectively. These advancements aren’t just about enhancing technical capabilities; they’re about improving outcomes for society as a whole.


IT leaders weigh up AI’s role to improve data management

“The important thing in data management is having a solid disaster recovery plan,” says Macario. “In fact, security for an NGO like ours is both a cyber and physical problem because not only are we the target of attacks, but we operate in war zones, where the services provided aren’t always reliable and, in the event of failures, hardware replacement parts are difficult to find.” Innovative encryption and geographic data backup technologies are applied, in particular immutable cloud technology that protects against ransomware. These are supported by AI for endpoint protection. User identities are also managed on the Azure Entra ID platform, which has integrated AI and warns of suspicious activity in real time. ... “We turned to the big technology players to solve the problem and the LLM algorithms led to a turning point, because they allowed us to carry out the analyses,” says Macario. “These are used by our Medical Division departments to analyze access to care and improve quality, obtain statistics, create an archive, and understand what instruments, drugs, and doctors we need in a war context. The data form a scientific basis on which to base our intervention and our ability to report the effects of war on civilian populations.”


Is it possible to save money and run on a public cloud?

In the early days of cloud computing, big providers promoted the migration of applications and data to the cloud without modification or modernization. The advice was to fix it when it got there, not before. Guess what? Workloads were never fixed or modernized. These lift-and-shift applications and data consumed about three times the resources enterprises thought they would. This led to a disenchantment with public cloud providers, even though enterprises also bore some responsibility. ... High cloud costs usually stem from the wrong cloud services or tools, flawed application load estimates, and developers who designed applications without understanding where the cloud saves money. You can see this in the purposeful use of microservices as a base architecture. ... The key to winning this war is planning. You’ll need good architecture and engineering talent to find the right path. This is probably the biggest reason we haven’t gone down this road as often as we should. Enterprises can’t find the people needed to make these calls; it’s hard to find that level of skill. Cloud providers can also be a source of help. Many have begun to use the “O word” (optimization) and understand that to keep their customers happy, they need to provide some optimization guidance. 


Beyond Compliance: Leveraging Security Audits for Enhanced Risk Management

One of the most effective ways to approach risk management in an organization is through a comprehensive security audit. Security audits objectively assess layers of an organization’s security controls, established system and operational policies, and various documented procedures. Rather than simply passing or failing a defined list of compliance protocols, a security audit examines all elements of an organization’s security posture. This includes looking for potential weak points in connected networks and systems and finding areas which may be useful but could be improved. ... Security auditing processes can also be built into the organization’s disaster recovery initiatives. As the business tests its incident response protocols throughout the year, pairing this process with a formal audit helps the organization to be better prepared to respond more effectively to operational disruptions. However, the benefits of a security audit aren’t just associated with minimizing operational risks. This proactive security approach can also play an impactful role when demonstrating the organization’s commitment to their customer’s data privacy.


Security, AIOps top mainframe customer challenges

“The increased prioritization of AIOps reflects surging interest in the implementation of emerging technologies on the mainframe. Those reporting the adoption of AIOps on the mainframe increased [9%] from the 2023 BMC Mainframe Survey, while 76% of respondents reported the use of generative AI [genAI] in their organizations,” McKenney wrote. “The power of AI/ML and genAI open a new world of possibility in IT management. Organizations are leveraging these technologies throughout their IT ecosystems to gain real-time insight into security postures, automate issue resolution, gain critical business insight, and onboard and train new personnel,” McKenney wrote. ... Its BMC AMI Platform will feature the BMC AMI Assistant, a chat-based, AI-powered assistant available for developers, operators, system programmers, and IT managers to use for real-time explanations, support, and automation, the company stated. “Whether help is needed to debug code, understand system processes, or make informed decisions and take actions, the BMC AMI Assistant will provide expert guidance instantly, enhancing productivity and reducing downtime. Users will leverage BMC AMI Assistant Tools to capture their local knowledge and integrate it seamlessly into the BMC AMI Assistant,” McKenney wrote in a BMC blog.



Quote for the day:

"The only way to achieve the impossible is to believe it is possible." -- Charles Kingsleigh

Daily Tech Digest - September 27, 2024

What happens when everybody winds up wearing ‘AI body cams’?

The first body cams were primitive. They were enormous, had narrow, 68-degree fields of view, had only 16GB of internal storage, and had batteries that lasted only four hours. Body cams now usually have high-resolution sensors, GPS, infrared for low-light conditions, and fast charging. They can be automatically activated through Bluetooth sensors, weapon release, or sirens. They use backend management systems to store, analyze, and share video footage. The state of the art — and the future of the category — is multimodal AI. ... Using such a system in multimodal AI, a user could converse with their AI agent, asking questions about what the glasses were pointed at previously. These glasses will almost certainly have a dashcam-like feature where video is constantly recorded and deleted. Users can push a button to capture and store the past 30 seconds or 30 minutes of video and audio — basically creating an AI body cam worn on the face. Smart glasses will be superior to body cams, and over time, AI body cams for police and other professionals will no doubt be replaced by AI camera glasses. This raises the question: When everybody has AI body cams — specifically glasses with AI body cam functionality — what does society then look like?


Aligning Cloud Costs With Sustainability and Business Goals

AI is poised for democratization, similar to the cloud. Users will have the choice and ability to use multiple models for numerous use cases. Future trends indicate a rise in culturally aware and industry-specific models that will further facilitate the democratization of AI. Singapore's National Research Foundation launched AI Singapore - a national program to enhance the country's AI capabilities - to make its LLMs more culturally accurate, localized and tailored to Southeast Asia. AWS is working with Singapore public organizations to develop innovative, industry-first solutions powered by AI and gen AI, including AI Singapore's SEA-LION. Building on AWS' scalable compute infrastructure, SEA-LION is a family of LLMs that is specifically pre-trained and instruct-tuned for Southeast Asian languages and cultures. AWS released the Amazon Bedrock managed service to support gen AI deployments for large enterprises. It now provides easy access to multiple large language models and foundation models from AI21 Labs, Anthropic, Cohere, Meta and Stability AI through a single API, along with a broad set of capabilities organizations need to build gen AI applications with security, privacy and responsible AI.


Fortifying the Weakest Link: How to Safeguard Against Supply Chain Cyberattacks

Failures in systems and processes by third parties can lead to catastrophic reputational and operational damage. It is no longer sufficient to merely implement basic vendor management procedures. Organizations must also take proactive measures to safeguard against third-party control failures. ... Protect administrative access to the tools and applications used by DevOps teams. Enable secure application configuration via secrets and authenticate applications and services with high confidence. Mandate that software suppliers certify and extend security controls to cover microservices, cloud, and DevOps environments. ... Ensure that your systems and those of your suppliers are regularly updated and patched for known vulnerabilities. Prevent the use of unsupported or outdated software that could introduce new vulnerabilities. ... Configure cloud environments to reject authorization requests involving tokens that deviate from accepted norms. For on-premises systems, follow the National Security Agency’s guidelines by deploying a Federal Information Processing Standards (FIPS)-validated Hardware Security Module (HSM) to store token-signing certificate private keys. HSMs significantly reduce the risk of key theft by threat actors.


Are hardware supply chain attacks “cyber attacks?”

In the case of hardware supply chain attacks, malicious actors infiltrate the supply of devices, or the physical manufacturing process of pieces of hardware and purposefully build in security flaws, faulty parts, or backdoors they know they can take advantage of in the future, such as malicious microchips on a circuit board. For Cisco’s part, the Cisco Trustworthy technologies program, including secure boot, Cisco Trust Anchor module (TAm), and runtime defenses give customers the confidence that the product is genuinely from Cisco. As I was thinking about the threat of hardware supply chain attacks, I was left wondering who, exactly, should be tasked with solving this problem. And I think I’ve decided the onus falls on several different sectors. It shouldn’t just be viewed as a cybersecurity issue, because for a hardware supply chain attack, an adversary would likely need to physically infiltrate or tamper with the manufacturing process. Entering a manufacturing facility or other stops along the logistics chain would require some level of network-level manipulation, such as faking a card reader or finding a way to trick physical defenses — that’s why Cisco Talos Incident Response looks for these types of things in Purple Team exercises.


How The Digital Twin Helps Build Resilient Manufacturing Operations

The digital twin is a sophisticated tool. It must be a true working virtual replica of the physical asset. Anything short of that means problems. To make it all work, consider several key aspects. You will most likely need multiple digital twins of the same physical asset. At least one digital twin should be online most of the time, collecting data from the real world. Other copies of the digital twin might be offline at times, but they use the real-world data in various training situations and for optimizing the equipment and the line. Getting data from the real world into the digital twin is one of the best and most common uses for the Industrial Internet of Things (IIoT). The latest digital twins are incorporating AI to help optimize the design process, learn from previous designs and create new equipment designs. AI helps create operator training scenarios and optimizes the equipment and production line. AI learns from the optimization process and, even with new wrinkles thrown into the real world, learns how to optimize the optimization process. It helps troubleshoot the equipment, finding problems quickly, long before they become problems.
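A toy sketch of that online/offline pattern might look like the following Python class, where one twin ingests (simulated) IIoT readings while a deep copy is used offline for a crude what-if. The asset name, reading ranges, and alarm threshold are illustrative assumptions.

```python
import copy
import random
from dataclasses import dataclass, field


@dataclass
class MachineTwin:
    """A toy digital twin of a single machine on the line."""
    asset_id: str
    temperature_c: float = 20.0
    history: list[float] = field(default_factory=list)

    def ingest(self, reading_c: float) -> None:
        """Online mode: mirror a real-world IIoT reading into the twin."""
        self.temperature_c = reading_c
        self.history.append(reading_c)
        if reading_c > 85.0:  # illustrative alarm threshold
            print(f"[{self.asset_id}] over-temperature alert: {reading_c:.1f} C")

    def simulate_setpoint(self, delta_c: float) -> float:
        """Offline mode: replay history with an adjusted setpoint (crude what-if)."""
        return max(self.history, default=self.temperature_c) + delta_c


# Online twin fed by (simulated) sensor data.
online = MachineTwin("press-07")
for _ in range(10):
    online.ingest(random.uniform(60.0, 90.0))

# Offline copy used for training and optimization without touching the live twin.
offline = copy.deepcopy(online)
print("worst case if setpoint raised 5 C:", round(offline.simulate_setpoint(5.0), 1), "C")
```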


3 tips for securing IoT devices in a connected world

Comprehensive visibility refers to an organization’s ability to identify, monitor and remotely manage each individual device connected to its network. Gaining this level of visibility is a crucial first step for maintaining a robust security posture and preventing unauthorized access or potential breaches. ... Addressing common vulnerabilities like built-in backdoors and unpatched firmware is essential for maintaining the security of connected devices. Built-in backdoors are hidden or undocumented access points in a device’s software or firmware that allow unauthorized access to the device or its network. These backdoors are often left by manufacturers for maintenance or troubleshooting purposes but can be exploited by attackers if not properly secured. ... One important step in secure deployment is limiting access to critical resources using network segmentation. Network segmentation involves dividing a network into smaller, isolated segments or subnets, each with its own security controls. This practice limits the movement of threats across the network, reducing the risk of a compromised IoT device leading to a broader security breach. 
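As a small illustration of segmentation hygiene, the sketch below checks a hypothetical device inventory against an assumed subnet plan and flags any device that has drifted outside the segment assigned to its class:

```python
import ipaddress

# Illustrative segment plan: each device class gets its own isolated subnet.
SEGMENTS = {
    "camera": ipaddress.ip_network("10.20.1.0/24"),
    "sensor": ipaddress.ip_network("10.20.2.0/24"),
    "badge":  ipaddress.ip_network("10.20.3.0/24"),
}

# Inventory as discovered on the network (hypothetical devices).
inventory = [
    {"name": "cam-lobby-01", "class": "camera", "ip": "10.20.1.14"},
    {"name": "temp-floor3",  "class": "sensor", "ip": "10.20.2.40"},
    {"name": "cam-dock-02",  "class": "camera", "ip": "10.20.3.77"},  # wrong segment
]

for device in inventory:
    expected = SEGMENTS[device["class"]]
    if ipaddress.ip_address(device["ip"]) not in expected:
        print(f"MISPLACED: {device['name']} ({device['ip']}) should be in {expected}")
    else:
        print(f"ok: {device['name']} is in its assigned segment")
```

Automating this kind of inventory-versus-policy check is one practical way to turn "comprehensive visibility" into something that runs continuously rather than once a year.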


Why countries are in a race to build AI factories in the name of sovereign AI

“The number of sovereign AI clouds is really quite significant,” Huang said in the earnings call. He said Nvidia wants to enable every company to build its own custom AI models. The motivations weren’t just about keeping a country’s data in local tech infrastructure to protect it. Rather, they saw the need to invest in sovereign AI infrastructure to support economic growth and industrial innovation, said Colette Kress, CFO of Nvidia, in the earnings call. That was around the time when the Biden administration was restricting sales of the most powerful AI chips to China, requiring a license from the U.S. government before shipments could happen. That licensing requirement is still in effect. As a result, China reportedly began its own attempts to create AI chips to compete with Nvidia’s. But it wasn’t just China. Kress also said Nvidia was working with the Indian government and its large tech companies like Infosys, Reliance and Tata to boost their “sovereign AI infrastructure.” Meanwhile, French private cloud provider Scaleway was investing in regional AI clouds to fuel AI advances in Europe as part of a “new economic imperative,” Kress said. 


Is Spring AI Strong Enough for AI?

While the Spring framework itself does not have a dedicated AI library, it has proven to be an effective platform for developing AI-driven systems when combined with robust AI/ML frameworks. Spring Boot and Spring Cloud provide essential capabilities for deploying AI/ML models, managing REST APIs, and orchestrating microservices, all of which are crucial components for building and deploying production-ready AI systems. ... Spring, typically known as a versatile enterprise framework, showcases its effectiveness in high-quality AI deployments when combined with its robust scalability, security, and microservice architecture features. Its seamless integration with machine learning models, especially through REST APIs and cloud infrastructure, positions it as a formidable choice for enterprises seeking to integrate AI with intricate business systems. Nevertheless, for more specialized tasks such as model versioning, training orchestration, and rapid prototyping, AI-specific frameworks like TensorFlow Serving, Kubernetes, and MLflow offer tailored solutions that excel in high-performance model serving, distributed AI workflows, and streamlined management of the complete machine learning lifecycle with minimal manual effort.


Top Skills Chief AI Officers Must Have to Succeed in Modern Workplace

Domain knowledge is obviously vital. Possessing an understanding of core AI concepts is a must. Machine learning (ML), data analytics, and software development are elementary requirements a capable CAIO will leverage for specific business goals. Given the incipient stage that AI transformation is at, candidates will have to supplement their knowledge with continuous learning, adaptability, and initiative. Notably, a CAIO must use their expertise to arrive at data-driven decisions—it sets a good professional apart and highlights their capacity to troubleshoot accurately. ... A CAIO must translate AI concepts into clear strategies, prioritizing among multiple potential implementations based on their judgment of what will deliver the greatest value. This involves setting concrete goals such as improved efficiency, enhanced customer engagement, or increased employee productivity, and devising a roadmap to achieve them. ... Beyond the technical knowledge and strategic acumen, a powerful grasp of how business processes work within an organisation and why they function the way they do is crucial. CAIOs must foremost align with this culture and find ways to integrate AI within that framework.


5 Ways to Keep Global Development Teams Productive

A significant challenge for global development teams is ensuring smooth collaboration between different locations. Without the right tools and processes, team members can experience delays due to time zone differences, slow data access, or inconsistent version control systems. To improve collaboration, development teams should implement systems that provide fast, reliable access to codebases, regardless of location. Real-time collaboration tools that synchronize work across global teams are essential. For instance, platforms that replicate repositories in real-time across different sites ensure that all team members are working with the latest version of the code, reducing the risk of inconsistencies. ... Compliance with data protection laws, such as the GDPR or CCPA, is also essential for companies working across borders. Development teams need to be mindful of where data is stored and ensure that their tools meet the necessary compliance requirements. Security policies should be applied consistently across all locations to prevent breaches and data leaks, which can lead to significant financial and reputational damage.



Quote for the day:

“Without continual growth and progress, such words as improvement, achievement, and success have no meaning.” -- Benjamin Franklin

Daily Tech Digest - September 25, 2024

When technical debt strikes the security stack

“Security professionals are not immune from acquiring their own technical debt. It comes through a lack of attention to periodic reviews and maintenance of security controls,” says Howard Taylor, CISO of Radware. “The basic rule is that security rapidly decreases if it is not continuously improved. The time will come when a security incident or audit will require an emergency collection of the debt.” ... The paradox of security technical debt is that many departments concurrently suffer from both solution debt that causes gaps in coverage or capabilities, as well as rampant tool sprawl that eats up budget and makes it difficult to effectively use tools. ... “Detection engineering is often a large source of technical debt: over the years, a great detection engineering team can produce many great detections, but the reliability of those detections can start to fade as the rest of the infrastructure changes,” he says. “Great detections become less reliable over time, the authors leave the company, and the detection starts to be ignored. This leads to waste of energy and very often cost.” ... Role sprawl is another common scenario that contributes significantly to security debt, says Piyush Pandey, CEO at Pathlock.


Google Announces New Gmail Security Move For Millions

From the Gmail perspective, the security advisor will include a security sandbox where all email attachments will be scanned for malicious software employing a virtual environment to safely analyze said files. Google said the tool can “delay message delivery, allow customization of scan rules, and automatically move suspicious messages to the spam folder.” Gmail also gets enhanced safe browsing which gives additional protection by scanning incoming messages for malicious content before it is actually delivered. ... A Google spokesperson told me that the AI Gemini app is to get enterprise-grade security protections in core services now. With availability from October 15, for customers running on a Workspace Business, Enterprise, or Frontline plan, Google said that “with all of the core Workspace security and privacy controls in place, companies have the tools to deploy AI securely, privately and responsibly in their organizations in the specific way that they want it.” The critical components of this security move include ensuring Gemini is subject to the same privacy, security, and compliance policies as the rest of the Workspace core services, such as Gmail and Docs.


The Next Big Interconnect Technology Could Be Plastic

e-Tube technology is a new, scalable interconnect platform that uses radio wave transmission over a dielectric waveguide made of – drumroll – common plastic material such as low-density polyethylene (LDPE). While waveguide theory has been studied for many years, only a few organizations have applied the technology for mainstream data interconnect applications. Because copper and optical interconnects are historically entrenched technologies, most research has focused on extending copper life or improving energy and cost efficiency of optical solutions. But now there is a shift toward exploring the e-Tube option that delivers a combination of benefits that copper and optical cannot, including energy-efficiency, low latency, cost-efficiency and scalability to multi-terabit network speeds required in next-gen data centers. The key metrics for data center cabling are peak throughput, energy efficiency, low latency, long cable reach and cost that enables mass deployment. Across these metrics, e-Tube technology provides advantages compared to copper and optical technologies. Traditionally, copper-based interconnects have been considered an inexpensive and reliable choice for short-reach data center applications, such as top-of-rack switch connections. 


From Theory to Action: Building a Strong Cyber Risk Governance Framework

Setting your risk appetite is about more than just throwing a number out there. It’s about understanding the types of risks you face and translating them into specific, measurable risk tolerance statements. For example, “We’re willing to absorb up to $1 million in cyber losses annually but no more.” Once you have that in place, you’ll find decision-making becomes much more straightforward. ... If your current cybersecurity budget isn't sufficient to handle your stated risk appetite, you may need to adjust it. One of the best ways to determine if your budget aligns with your risk appetite is by using loss exceedance curves (LECs). These handy charts allow you to visualize the forecasted likelihood and impact of potential cyber events. They help you decide where to invest more in cybersecurity and perhaps where even to cut back. ... One thing that a lot of organizations miss in their cyber risk governance framework is the effective use of cyber insurance. Here's the trick: cyber insurance shouldn’t be used to cover routine losses. Doing so will only lead to increased premiums. Instead, it should be your safety net for the larger, more catastrophic incidents – the kinds that keep executives awake at night.
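A minimal Monte Carlo sketch of a loss exceedance curve is shown below. The Poisson event frequency and lognormal severity parameters are illustrative assumptions, and the $1 million threshold echoes the risk-appetite statement above.

```python
import numpy as np

rng = np.random.default_rng(7)
years = 100_000  # simulated years

# Assumed annual event frequency (Poisson) and per-event severity (lognormal, in dollars).
event_counts = rng.poisson(lam=2.0, size=years)
annual_losses = np.array([
    rng.lognormal(mean=11.0, sigma=1.2, size=n).sum() if n else 0.0
    for n in event_counts
])

# Loss exceedance curve: P(total annual loss > threshold) for a few thresholds.
for threshold in (250_000, 1_000_000, 5_000_000):
    prob = (annual_losses > threshold).mean()
    print(f"P(annual loss > ${threshold:>9,}) = {prob:.1%}")
```

Plotting the full set of thresholds against their exceedance probabilities gives the curve itself; comparing that curve to the stated risk appetite shows where extra controls, or cyber insurance, are actually needed.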


Is Prompt Engineering Dead? How To Scale Enterprise GenAI Adoption

If you pick a model that is a poor fit for your use case, it will not be good at determining the context of the question and will fail at retrieving a reference point for the response. In those situations, the lack of reference data needed to provide an accurate response contributes to a hallucination. There are many situations where you would prefer the model to give no response at all rather than fabricate one; in practice, when no exact answer is available, the model takes whatever data points it deems contextually relevant to the query and returns an inaccurate answer. ... To leverage LLMs effectively at an enterprise scale, businesses need to understand their limitations. Prompt engineering and RAG can improve accuracy, but LLMs must be tightly limited in domain knowledge and scope. Each LLM should be trained for a specific use case, using a specific dataset with data owners providing feedback. This ensures no chance of confusing the model with information from different domains. The training process for LLMs differs from traditional machine learning, requiring human oversight and quality assurance by data owners.
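To make the grounding idea concrete, here is a stripped-down retrieval sketch that answers only when a stored document is similar enough to the query and otherwise returns nothing, the "prefer no response over a fabricated one" behavior described above. It uses TF-IDF and cosine similarity as stand-ins for a real embedding model and an LLM, and the documents and threshold are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Expense reports must be submitted within 30 days of the purchase date.",
    "Remote employees are reimbursed for home office equipment up to $500 per year.",
    "The data retention policy requires customer records to be deleted after 7 years.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)


def retrieve(query: str, threshold: float = 0.2) -> str | None:
    """Return the most relevant document, or None if nothing clears the threshold."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_vectors)[0]
    best = scores.argmax()
    return documents[best] if scores[best] >= threshold else None


for query in ["How long do I have to file an expense report?",
              "What is the company's stance on quantum computing?"]:
    context = retrieve(query)
    if context is None:
        print(f"Q: {query}\nA: no grounded answer available\n")
    else:
        print(f"Q: {query}\nA (grounded in): {context}\n")
```

The second query deliberately has no matching document, and the system declines to answer rather than improvising, which is the behavior enterprises need to engineer around hallucination.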


AI disruption in Fintech: The dawn of smarter financial solutions

Financial institutions face diverse fraud challenges, from identity theft to fund transfer scams. Manual analysis of countless daily transactions is impractical. AI-based systems are empowering Fintechs to analyze data, detect anomalies, and flag suspicious activities. AI is monitoring transactions, filtering spam, and identifying malware. It can recognise social engineering patterns and alert users to potential threats. While fraudsters also use AI for sophisticated scams, financial institutions can leverage AI to identify synthetic content and distinguish between trustworthy and untrustworthy information. ... AI is transforming fintech customer service, enhancing retention and loyalty. It provides personalised, consistent experiences across channels, anticipating needs and offering value-driven recommendations. AI-powered chatbots handle common queries efficiently, allowing human agents to focus on complex issues. This technology enables 24/7 support across various platforms, meeting customer expectations for instant access. AI analytics predict customer needs based on financial history, transaction patterns, and life events, enabling targeted, timely offers. 
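
As a rough sketch of the anomaly-detection side of this, the following uses an Isolation Forest over a handful of hypothetical transaction features; the features, distributions, and contamination rate are illustrative assumptions, not a production fraud model.

```python
# Minimal sketch of transaction anomaly flagging with an Isolation Forest.
# Features and parameters are illustrative, not a production fraud model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical features per transaction: [amount, hour_of_day, merchant_risk_score]
normal = np.column_stack([
    rng.lognormal(3.5, 0.5, 1000),   # typical purchase amounts
    rng.integers(8, 22, 1000),       # daytime activity
    rng.uniform(0.0, 0.3, 1000),     # low-risk merchants
])
suspicious = np.array([[5000.0, 3, 0.9],   # large, late-night, risky merchant
                       [8000.0, 4, 0.95]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = model.predict(np.vstack([normal[:3], suspicious]))  # -1 = anomaly, 1 = normal
print(flags)  # the last two transactions should be flagged for human review
```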


CIOs Go Bold

In business, someone who is bold exudes confidence and assertiveness and is business savvy. However, there is a fine line between being assertive and confident in a way that is admired and being perceived as overbearing and hard to work with. ... If your personal CIO goals include being bolder, the first step is to self-assess. Then, look around. You probably already know individuals in the organization or colleagues in the C-suite who are perceived as bold movers and shakers. What did they do to acquire this reputation? ... To get results from the ideas you propose, the outcomes of your ideas must address strategic goals and/or pain points in the business. Consequently, the first rule of thumb for CIOs is to think beyond the IT box: instead of leading with technology, ask how an IT solution can help solve a particular challenge for the business. Digitalization is a prime example. Early digitalization projects started out with missions such as eliminating paper by digitalizing information and making it more searchable and accessible. Unfortunately, being able to search and access data was hard to quantify in terms of business results.


What does the Cyber Security and Resilience Bill mean for businesses?

The Bill aims to strengthen the UK’s cyber defences by ensuring that critical infrastructure and digital services are secure, protecting both those services and their supply chains. It’s expected to share common ground with NIS2, but there are also some elements that are notably absent. These differences could mean the Bill is not quite as burdensome as its European counterpart, but equally it runs the risk of being less effective. ... The problem now is that many businesses will be looking at both sets of regulations and scratching their heads in confusion. Should they assume that the Bill will follow the trajectory of NIS2 and make preparations accordingly, or should they assume it will continue to take a lighter touch, and one that may not even apply to them? There’s no doubt that NIS2 will introduce a significant compliance burden, with one report suggesting it will cost upwards of €31.2 billion per year. Then there’s the issue of those that will need to comply with both sets of regulations, i.e. those entities that either supply customers or have offices on the continent. They will be looking for the types of commonalities we’ve explored here in order to harmonise their compliance efforts and achieve economies of scale.


3 Key Practices for Perfecting Cloud Native Architecture

As microservices proliferate, managing their communication becomes increasingly complex. Service meshes like Istio or Linkerd offer a solution by handling service discovery, load balancing, and secure communication between services. This allows developers to focus on building features rather than getting bogged down by the intricacies of inter-service communication. ... Failures are inevitable in cloud native environments. Designing microservices with fault isolation in mind helps prevent a single service failure from cascading throughout the entire system. By implementing circuit breakers and retry mechanisms, organizations can enhance the resilience of their architecture, ensuring that their systems remain robust even in the face of unexpected challenges. ... Traditional CI/CD pipelines often become bottlenecks during the build and testing phases. To overcome this, modern CI/CD tools that support parallel execution should be leveraged. ... Not every code change necessitates a complete rebuild of the entire application. Organizations can significantly speed up the pipeline while conserving resources by implementing incremental builds and tests, which only recompile and retest the modified portions of the codebase.
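
For illustration, a circuit breaker can be as simple as a failure counter with a cool-down period. The sketch below is a minimal, assumption-laden version in Python; in practice teams typically rely on a library or on mesh-level policies (for example, Istio's outlier detection) rather than hand-rolled logic.

```python
# Minimal sketch of a circuit breaker around a downstream call, assuming a
# simple failure-count threshold and cool-down. Real deployments usually use
# a resilience library or service-mesh policy instead of hand-rolled code.
import time

class CircuitBreaker:
    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # timestamp when the breaker tripped

    def call(self, fn, *args, **kwargs):
        # While "open", fail fast instead of hammering an unhealthy service.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            # Half-open: allow a trial request through after the cool-down.
            self.opened_at = None
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # any success resets the failure count
        return result
```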


Copilots and low-code apps are creating a new 'vast attack surface' - 4 ways to fix that

"In traditional application development, apps are carefully built throughout the software development lifecycle, where each app is continuously planned, designed, implemented, measured, and analyzed," they explain. "In modern business application development, however, no such checks and balances exist and a new form of shadow IT emerges." Within the range of copilot solutions, "anyone can build and access powerful business apps and copilots that access, transfer, and store sensitive data and contribute to critical business operations with just a couple of clicks of the mouse or use of natural language text prompts," the study cautions. "The velocity and magnitude of this new wave of application development creates a new and vast attack surface." Many enterprises encouraging copilot and low-code development are "not fully embracing that they need to contextualize and understand not only how many apps and copilots are being built, but also the business context such as what data the app interacts with, who it is intended for, and what business function it is meant to accomplish."



Quote for the day:

“Winners are not afraid of losing. But losers are. Failure is part of the process of success. People who avoid failure also avoid success.” -- Robert T. Kiyosaki

Daily Tech Digest - September 24, 2024

Effective Strategies for Talking About Security Risks with Business Leaders

As with every difficult conversation, CISOs must pick the right time, place, and strategy to discuss cyber risks with the executive team and staff. Instead of waiting for the opportunity to arise, CISOs should proactively engage with individuals at all levels of the organization to influence them and ensure an understanding of security policies and incident response. These conversations could come in the form of monthly or quarterly meetings with senior stakeholders to maintain the cadence and consistency of the conversations, discuss how the threat landscape is evolving and review their part of the business through a cybersecurity lens. They could also be casual watercooler chats with staff members, which not only help to educate and inform employees but also build vital internal relationships that can affect online behaviors. In addition to talking, CISOs must also listen to and learn about key stakeholders to tailor conversations around their interests and concerns. ... If you're talking to the board, you'll need to know the people around that table. What are their interests, and how can you communicate in a way that resonates with them and gets their attention? Use visualization techniques and find a "cyber ally" on the board who will back you and help reinforce your ideas and the information you share.


Is Explainable AI Explainable Enough Yet?

“More often than not, the higher the accuracy provided by an AI model, the more complex and less explainable it becomes, which makes developing explainable AI models challenging,” says Godbole. “The premise of these AI systems is that they can work with high-dimensional data and build non-linear relationships that are beyond human capabilities. This allows them to identify patterns at a large scale and provide higher accuracy. However, it becomes difficult to explain this non-linearity and provide simple, intuitive explanations in understandable terms.” Other challenges include providing explanations that are both comprehensive and easily understandable, as well as businesses’ reluctance to explain their systems fully for fear of divulging intellectual property (IP) and losing their competitive advantage. “As we make progress towards more sophisticated AI systems, we may face greater challenges in explaining their decision-making processes. For autonomous systems, providing real-time explainability for critical decisions could be technically difficult, even though it will be highly necessary,” says Godbole. When AI is used in sensitive areas, it will become increasingly important to explain decisions that have significant ethical implications, but this will also be challenging.
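
One family of techniques aimed at this gap is post-hoc explanation. The sketch below uses permutation importance from scikit-learn on a deliberately opaque model; the dataset and model are illustrative, and the output is only a rough global signal of which inputs the model leans on, not a full explanation of individual decisions.

```python
# Hedged sketch of one common post-hoc explanation technique: permutation
# importance applied to an otherwise opaque model. Data and model are toy
# examples, and the result is a coarse global signal, not a per-decision
# explanation.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=6, n_informative=3, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance ~ {score:.3f}")
```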


The challenge of cloud computing forensics

Data replication across multiple locations complicates forensics processes that require the ability to pinpoint sources for analysis. Consider the challenge of retrieving deleted data from cloud systems—not just a technical obstacle, but a matter of accountability that is often not addressed by IT until it’s too late. Multitenancy involves shared resources among multiple users, making it difficult to distinguish and segregate data. This is a systemic problem for cloud security, and it is particularly problematic for cloud platform forensics. The NIST document acknowledges this challenge and recommends the implementation of access mechanisms and frameworks so companies can maintain data integrity and manage incident response. That way, the mechanisms are already in place to deal with issues when they occur, because accounting happens on an ongoing basis rather than after the fact. The lack of location transparency is a nightmare. Data resides in various physical jurisdictions, all with different laws and cultural considerations. Crimes may occur on a public cloud point of presence in a country that disallows warrants to examine the physical systems, whereas other countries have more options for law enforcement. Guess which countries the criminals choose to leverage.


Is the rise of genAI about to create an energy crisis?

Though data center power consumption is expected to double by 2028, according to IDC research director Sean Graham, AI still represents only a small share of that consumption — just 18%. “So, it’s not fair to blame energy consumption on AI,” he said. “Now, I don’t mean to say AI isn’t using a lot of energy and data centers aren’t growing at a very fast rate. Data center energy consumption is growing at 20% per year. That’s significant, but it’s still only 2.5% of the global energy demand.” “It’s not like we can blame energy problems exclusively on AI,” Graham said. ... Beyond the pressure from genAI growth, electricity prices are rising due to supply and demand dynamics, environmental regulations, geopolitical events, and extreme weather events fueled in part by climate change, according to an IDC study published today. IDC believes the higher electricity prices of the last five years are likely to continue, making data centers considerably more expensive to operate. Amid that backdrop, electricity suppliers and other utilities have argued that AI creators and hosts should be required to pay higher prices for electricity — as cloud providers did before them — because they’re quickly consuming greater amounts of compute cycles and, therefore, energy compared to other users.


20 Years in Open Source: Resilience, Failure, Success

The rise of Big Tech has emphasized one of the most significant truths I’ve learned: the need for digital sovereignty. Over time, I’ve observed how centralized platforms can slowly erode consumers’ authority over their data and software. Today, more than ever, I believe that open source is a crucial path to regaining control — whether you’re an individual, a business, or a government. With open source software, you own your infrastructure, and you’re not subject to the whims of a vendor deciding to change prices, terms, or even direction. I’ve learned that part of being resilient in this industry means providing alternatives to centralized solutions. We built CryptPad to offer an encrypted, privacy-respecting alternative to tools like Google Docs. It hasn’t been easy, but it’s a project I believe in because it aligns with my core belief: people should control their data. I would improve the way the community communicates the benefits of open source. The conversation all too frequently concentrates on “free vs. paid” software. In reality, what matters is the distinction between dependence and freedom. I’ve concluded that we need to explain better how individuals may take charge of their data, privacy, and future by utilizing open source.


20 Tech Pros On Top Trends In Software Testing

The shift toward AI-driven testing will revolutionize software quality assurance. AI can intelligently predict potential failures, adapt to changes and optimize testing processes, ensuring that products are not only reliable, but also innovative. This approach allows us to focus on creating user experiences that are intuitive and delightful. ... AI-driven test automation has been the trend that almost every client of ours has been asking for in the past year. Combining advanced self-healing test scripts and visual testing methodologies has proven to improve software quality. This process also reduces the time to market by helping break down complex tasks. ... With many new applications relying heavily on third-party APIs or software libraries, rigorous security auditing and testing of these services is crucial to avoid supply chain attacks against critical services. ... One trend that will become increasingly important is shift-left security testing. As software development accelerates, security risks are growing. Integrating security testing into the early stages of development—shifting left—enables teams to identify vulnerabilities earlier, reduce remediation costs and ensure secure coding practices, ultimately leading to more secure software releases.


How to manage shadow IT and reduce your attack surface

To effectively mitigate the risks associated with shadow IT, your organization should adopt a comprehensive approach that encompasses the following strategies:
- Understanding the root causes: Engage with different business units to identify the pain points that drive employees to seek unauthorized solutions. Streamline your IT processes to reduce friction and make it easier for employees to accomplish their tasks within approved channels, minimizing the temptation to bypass security measures.
- Educating employees: Raise awareness across your organization about the risks associated with shadow IT and provide approved alternatives. Foster a culture of collaboration and open communication between IT and business teams, encouraging employees to seek guidance and support when selecting technology solutions.
- Establishing clear policies: Define and communicate guidelines for the appropriate use of personal devices, software, and services. Enforce consequences for policy violations to ensure compliance and accountability.
- Leveraging technology: Implement tools that enable your IT team to continuously discover and monitor all unknown and unmanaged IT assets (a minimal discovery sketch follows below).
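
As a trivial illustration of the discovery step, the sketch below diffs observed hosts against a sanctioned inventory; the data sources are placeholders, since in practice the observed set would come from DNS logs, a CASB, or network scanning.

```python
# Minimal sketch of one discovery step: compare what the network actually
# shows against the sanctioned asset inventory. The data here is placeholder
# content; real inputs would come from DNS logs, a CASB, or active scanning.
managed_assets = {"crm.corp.example", "erp.corp.example", "mail.corp.example"}
observed_hosts = {"crm.corp.example", "mail.corp.example",
                  "notes-sync.unknownvendor.example", "personal-nas.local"}

# Anything observed but not managed is a shadow IT candidate for review.
shadow_candidates = observed_hosts - managed_assets
for host in sorted(shadow_candidates):
    print(f"Unmanaged asset for review: {host}")
```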


How software teams should prepare for the digital twin and AI revolution

By integrating AI to enhance real-time analytics, users can develop a more nuanced understanding of emerging issues, improving situational awareness and allowing them to make better decisions. Using in-memory computing technology, digital twins produce real-time analytics results that users aggregate and query to continuously visualize the dynamics of a complex system and look for emerging issues that need attention. In the near future, generative AI-driven tools will magnify these capabilities by automatically generating queries, detecting anomalies, and then alerting users as needed. AI will create sophisticated data visualizations on dashboards that point to emerging issues, giving managers even better situational awareness and responsiveness. ... Digital twins can use ML techniques to monitor thousands of entry points and internal servers to detect unusual logins, access attempts, and processes. However, detecting patterns that integrate this information and create an overall threat assessment may require data aggregation and query to tie together the elements of a kill chain. Generative AI can assist by using these tools to detect unusual behaviors and alert personnel, who can carry the investigation forward.
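
As a small illustration of the kind of continuous check a digital twin might run on incoming telemetry, the sketch below flags readings whose rolling z-score drifts beyond a threshold; the window size, threshold, and data are illustrative assumptions.

```python
# Minimal sketch of a streaming check a digital twin might run on telemetry:
# a rolling z-score flags readings that drift from recent behavior. The
# window size, threshold, and data stream are illustrative assumptions.
from collections import deque
import statistics

class TwinMonitor:
    def __init__(self, window: int = 50, z_threshold: float = 3.0):
        self.readings = deque(maxlen=window)
        self.z_threshold = z_threshold

    def ingest(self, value: float) -> bool:
        """Return True when the new reading looks anomalous."""
        anomalous = False
        if len(self.readings) >= 10:  # wait for a minimal baseline
            mean = statistics.fmean(self.readings)
            stdev = statistics.pstdev(self.readings)
            if stdev > 0 and abs(value - mean) / stdev > self.z_threshold:
                anomalous = True
        self.readings.append(value)
        return anomalous

monitor = TwinMonitor()
stream = [20.1, 20.3, 19.8, 20.0] * 5 + [35.0]   # last reading is an outlier
flags = [monitor.ingest(v) for v in stream]
print(flags[-1])  # True: would trigger an alert for investigation
```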


The Open Source Software Balancing Act: How to Maximize the Benefits And Minimize the Risks

OSS has democratized access to cutting-edge technologies, fostered a culture of collaboration and empowered businesses to prioritize innovation. By tapping into the vast pool of open source components available, software developers can accelerate product development, minimize time-to-market and drive innovation at scale. ... Paying down technical debt requires two things: consistency and prioritization. First, organizations should opt for fewer high-quality suppliers with well-maintained open source projects because they have greater reliability and stability, reducing the likelihood of introducing bugs or issues into their own codebase that rack up tech debt. In terms of transparency, organizations must have complete visibility into their software infrastructure. This is another area where SBOMs are key. With an SBOM, developers have full visibility into every element of their software, which reduces the risk of using outdated or vulnerable components that contribute to technical debt. There’s no question that open source software offers unparalleled opportunities for innovation, collaboration and growth within the software development ecosystem.
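
As a small illustration of the visibility an SBOM provides, the sketch below lists the components recorded in a CycloneDX JSON document; the field names follow the CycloneDX format, while the file path and usage are assumptions.

```python
# Minimal sketch of using an SBOM for visibility: list the components recorded
# in a CycloneDX JSON document. Field names follow the CycloneDX format; the
# file path is a hypothetical example produced by an SBOM generator.
import json

def list_components(sbom_path: str) -> None:
    with open(sbom_path) as f:
        sbom = json.load(f)
    # CycloneDX stores dependencies in a top-level "components" array.
    for component in sbom.get("components", []):
        name = component.get("name", "<unknown>")
        version = component.get("version", "<unversioned>")
        print(f"{name}=={version}")

# Example usage (hypothetical file name):
# list_components("sbom.cyclonedx.json")
```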


Is AI really going to burn the planet?

Trying to understand exactly how energy-intensive model training is, is even more complex than understanding exactly how big data centers’ GHG sins are. A common “AI is environmentally bad” statistic is that training a large language model like GPT-3 is estimated to use just under 1,300 megawatt-hours (MWh) of electricity, about as much as 130 US homes consume annually, or the equivalent of watching 1.63 million hours of Netflix. The source for this stat is AI company Hugging Face, which does seem to have used some real science to arrive at these numbers. It also, to quote a May Hugging Face probe into all this, seems to have proven that "multi-purpose, generative architectures are orders of magnitude more [energy] expensive than task-specific systems for a variety of tasks, even when controlling for the number of model parameters.” It’s important to note that what’s being compared here are task-specific AI runs (optimized, smaller models trained for specific generative AI tasks) and multi-purpose models (machine learning models that should be able to process information from different modalities, including images, videos, and text).
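
For a rough sense of how those comparisons are derived, the sketch below back-calculates them from approximate reference figures (about 10.5 MWh per US household per year and roughly 0.8 kWh per streamed hour); both are ballpark assumptions, not measured values.

```python
# Quick sanity check of the comparisons in the text, using rough reference
# figures: ~10.5 MWh per average US household per year and ~0.8 kWh per hour
# of streaming. Both are approximate assumptions, not measured values.
TRAINING_MWH = 1_300

homes_per_year = TRAINING_MWH / 10.5            # roughly 120-130 households' annual use
netflix_hours = TRAINING_MWH * 1_000 / 0.8      # roughly 1.6 million streamed hours
print(f"~{homes_per_year:.0f} US homes for a year, "
      f"~{netflix_hours / 1e6:.2f}M hours of streaming")
```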



Quote for the day:

"Leadership is particularly necessary to ensure ready acceptance of the unfamiliar and that which is contrary to tradition." -- Cyril Falls