Daily Tech Digest - January 23, 2025


Quote for the day:

"Great leaders go forward without stopping, remain firm without tiring and remain enthusiastic while growing" -- Reed Markham


Cyber Insights 2025: APIs – The Threat Continues

APIs are easily written, often with low-code / no-code tools. Developers often consider them unimportant in comparison to the apps they connect, and assume they are protected by the tools that protect those apps. Bad call. “API attacks will increase in 2025 due to this over-reliance on existing application security and API management tools, but also due to organizations dragging their heels when it comes to protecting APIs,” says James Sherlow, systems engineering director of EMEA at Cequence Security. “While there was plenty of motivation to roll out APIs to stand up new services and support revenue streams, the same incentives are not there when it comes to protecting them.” Meanwhile, attackers are becoming increasingly sophisticated in their attacks. “In contrast, threat actors are not resting on their laurels,” he continued. “It’s now not uncommon for them to use multi-faceted attacks that seek to evade detection and then dodge and feint when the attack is blocked, all the time waiting until the last minute to target their end goal.” In short, he says, “It’s not until the business is breached that it wakes up to the fact that API protection and application protection are not one and the same thing. Web Application Firewalls, Content Delivery Networks, and API Gateways do not adequately protect APIs.”
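
The distinction Sherlow draws - that gateways and WAFs are not API protection - can be made concrete. Signature-based perimeter tools match known attack patterns, while API-specific protection validates every request against the API's own contract. A minimal sketch of that idea, using a hypothetical endpoint schema and field names:

```python
def validate_request(payload, schema):
    """Return a list of violations; an empty list means the request conforms."""
    errors = []
    for field, expected_type in schema.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"wrong type for {field}")
    # Reject unexpected fields -- a common API abuse vector (e.g. mass
    # assignment) that generic WAF signatures typically do not catch.
    for field in payload:
        if field not in schema:
            errors.append(f"unexpected field: {field}")
    return errors

# Hypothetical contract for a transfer endpoint.
TRANSFER_SCHEMA = {"account_id": str, "amount": int}
```

A request that smuggles in an extra `role` field would sail past a signature-based WAF but fails this contract check.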


Box-Checking or Behavior-Changing? Training That Matters

The pressure to meet these requirements is intense, and when a company finds an “acceptable” solution, it too often just checks the box, knowing it is compliant, and sticks with that solution in perpetuity - whether or not it creates a more secure workplace and behavioral change. Training programs designed purely to meet regulations are rarely effective. These initiatives tend to rely on generic content that employees skim through and forget. Organizations may meet the legal standard, but they fail to address the root causes of risky behavior. ... To improve outcomes, training programs must connect with people on a more practical level. Tailoring the content to fit specific roles within the organization is one way to do this. The threats a finance team faces, for example, are different from those encountered by IT professionals, so their training should reflect those differences. When employees see the relevance of the material, they are more likely to engage with it. Professionals in security awareness roles can distinguish themselves by designing programs that meet these needs. Equally important is embracing the concept of continuous learning. Annual training sessions often fail to stick. Smaller, ongoing lessons delivered throughout the year help employees retain information and incorporate it into their daily routines.


OpenAI opposes data deletion demand in India citing US legal constraints

OpenAI has informed the Delhi High Court that any directive requiring it to delete training data used for ChatGPT would conflict with its legal obligations under US law. The statement came in response to a copyright lawsuit filed by the Reuters-backed Indian news agency ANI, marking a pivotal development in one of the first major AI-related legal battles in India. ... This case mirrors global legal trends, as OpenAI faces similar lawsuits in the United States and beyond, including from major organizations like The New York Times. OpenAI maintains its position that it adheres to the “fair use” doctrine, leveraging publicly available data to train its AI systems without infringing intellectual property laws. In the case of Raw Story Media v. OpenAI, heard in the Southern District of New York, the plaintiffs accused OpenAI of violating the Digital Millennium Copyright Act (DMCA) by stripping copyright management information (CMI) from their articles before using them to train ChatGPT. ... In the ANI v OpenAI case, the Delhi High Court has framed four key issues for adjudication, including whether using copyrighted material for training AI models constitutes infringement and whether Indian courts have jurisdiction over a US-based company. Nath’s view aligns with broader concerns over how existing legal frameworks struggle to keep pace with AI advancements.


Defense strategies to counter escalating hybrid attacks

Threat actor profiling plays a pivotal role in uncovering hybrid operations by going beyond surface-level indicators and examining deeper contextual elements. Profiling involves a thorough analysis of the actor’s history, their strategic objectives, and their operational behaviors across campaigns. For example, understanding the geopolitical implications of a ransomware attack targeting a defense contractor can reveal espionage motives cloaked in financial crime. Profiling allows researchers to differentiate between purely financial motivations and state-sponsored objectives masked as criminal operations. Hybrid actors often leave “behavioral fingerprints” – unique combinations of techniques and infrastructure reuse – that, when analyzed within the context of their history, can expose their true intentions. ... Threat intelligence feeds enriched with historical data can help correlate real-time events with known threat actor profiles. Additionally, implementing deception techniques, such as industry-specific honeypots, can reveal operational objectives and distinguish between actors based on their response to decoys. ... Organizations must adapt by adopting a defense-in-depth strategy that combines proactive threat hunting, continuous monitoring, and incident response preparedness.


4 Cybersecurity Misconceptions to Leave Behind in 2025

Workers need to avoid falling into a false sense of security, and organizations must ensure that they are frequently updating advice and strategies to reduce the likelihood of their employees falling victim. In addition, we found that this confidence doesn’t necessarily translate into action. A notable portion of those surveyed (29%) admit that they don’t report suspicious messages even when they do identify a phishing scam, despite the presence of convenient reporting tools like “report phishing” buttons. ... Our second misconception stems from workers’ sense of helplessness. This kind of cyber apathy can become a dangerous self-fulfilling prophecy if left unaddressed. The key problem is that even if it’s true that information is already online, this isn’t equivalent to being directly under threat, and there are different levels of risk. It’s one thing knowing someone has your home address; knowing they have your front door key in their pocket is quite another. Even if it’s hard to keep all of your data hidden, that doesn’t mean it’s not worth taking steps to keep key information protected. While it can seem impossible to stay safe when so much personal data is publicly available, this should be the impetus to bolster cybersecurity practices, such as not including personal information in passwords.


Real datacenter emissions are a dirty secret

With legislation such as the EU's Corporate Sustainability Reporting Directive (CSRD) now in force, customers and resellers alike are expecting more detailed carbon emissions reporting across all three Scopes from suppliers and vendors, according to Canalys. This expectation of transparency is increasingly important in vendor selection processes because customers need their vendors to share specific numbers to quantify the environmental impact of their cloud usage. "AWS has continued to fall behind its competitors here by not providing Scope 3 emissions data via its Customer Carbon Footprint Tool, which is still unavailable," Caddy claimed. "This issue has frustrated sustainability-focused customers and partners alike for years now, but as companies prepare for CSRD disclosure, this lack of granular emissions disclosure from AWS can create compliance challenges for EU-based AWS customers." We asked Amazon why it doesn't break out the emissions data for AWS separately from its other operations, but while the company confirmed this is so, it declined to offer an explanation. Neither did Microsoft nor Google. In a statement, an AWS spokesperson told us: "We continue to publish a detailed, transparent report of our year-on-year progress decarbonizing our operations, including across our datacenters, in our Sustainability Report."


5 hot network trends for 2025

AI will generate new levels of network traffic, new requirements for low latency, and new layers of complexity. The saving grace, for network operators, is AIOps – the use of AI to optimize and automate network processes. “The integration of artificial intelligence (AI) into IT operations (ITOps) is becoming indispensable,” says Forrester analyst Carlos Casanova. “AIOps provides real-time contextualization and insights across the IT estate, ensuring that network infrastructure operates at peak efficiency in serving business needs.” ... AIOps can deliver proactive issue resolution; it plays a crucial role in embedding zero trust into networks by detecting and mitigating threats in real time; and it can help network execs reach the Holy Grail of “self-managing, self-healing networks that could adapt to changing conditions and demands with minimal human intervention.” ... Industry veteran Zeus Kerravala predicts that 2025 will be the year that Ethernet becomes the protocol of choice for AI-based networking. “There is currently a holy war regarding InfiniBand versus Ethernet for networking for AI with InfiniBand having taken the early lead,” Kerravala says. Ethernet has seen tremendous advancements over the last few years, and its performance is now on par with InfiniBand, he says, citing a recent test conducted by World Wide Technology.


Building the Backbone of AI: Why Infrastructure Matters in the Race for Adoption

One of the primary challenges facing businesses when it comes to AI is having the foundational infrastructure to make it work. Depending on the use case, AI can be an incredibly demanding technology. Some algorithmic AI workloads use real-time inference, which will grossly underperform without a direct, high bandwidth, low-latency connection. ... An organization’s path to the cloud is really the central pillar of any successful AI strategy. The sheer scale at which organizations are harvesting and using data means that storing every piece of information on-premises is simply no longer viable. Instead, cloud-based data lakes and warehouses are now commonly used to store data, and having streamlined access to this data is essential. But this shift isn’t just about scale or storage – it’s about capability. AI models, particularly those requiring intensive training, often reside in the cloud, where hyperscalers can offer the power density and GPU capabilities that on-premises data centers typically cannot support. Choosing the right cloud provider in this context is of course vital, but the real game-changer lies not in the who of connectivity, but the how. Relying on the public internet for cloud access creates bottlenecks and risks, with unpredictable routes, variable latency, and compromised security.


Why all developers should adopt a safety-critical mindset

Safety-critical industries don’t just rely on reactive measures; they also invest heavily in proactive defenses. Defensive programming is a key practice here, emphasizing robust input validation, error handling, and preparation for edge cases. This same mindset can be invaluable in non-critical software development. A simple input error could crash a service if not properly handled—building systems with this in mind ensures you’re always anticipating the unexpected. Rigorous testing should also be a norm, and not just unit tests. While unit testing is valuable, it's important to go beyond that, testing real-world edge cases and boundary conditions. Consider fault injection testing, where specific failures are introduced (e.g., dropped packets, corrupted data, or unavailable resources) to observe how the system reacts. These methods complement stress testing under maximum load and simulations of network outages, offering a clearer picture of system resilience. Validating how your software handles external failures will build more confidence in your code. Graceful degradation is another principle worth adopting. If a system does fail, it should fail in a way that’s safe and understandable. For example, an online payment system might temporarily disable credit card processing but allow users to save items in their cart or check account details.
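
The payment example above can be sketched in a few lines. The gateway call, error type, and response shape are all hypothetical; the point is that input is validated up front and a dependency failure degrades the feature rather than crashing the whole checkout:

```python
class PaymentGatewayError(Exception):
    """Raised when the (hypothetical) payment gateway is unreachable."""

def charge_card(amount_cents):
    # Stand-in for a real gateway call; here it simulates an outage.
    raise PaymentGatewayError("gateway unreachable")

def checkout(cart, amount_cents):
    """Attempt payment, degrading gracefully instead of failing outright."""
    if amount_cents <= 0:
        raise ValueError("amount must be positive")  # defensive input check
    try:
        receipt = charge_card(amount_cents)
        return {"status": "paid", "receipt": receipt}
    except PaymentGatewayError:
        # Degraded mode: card processing is down, but the user keeps
        # their cart and can complete the purchase later.
        return {"status": "payment_unavailable", "saved_cart": cart}
```

The failure is contained, labeled, and understandable to the caller - the "fail safe" behavior the safety-critical mindset asks for.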


Strengthening Software Supply Chains with Dependency Management

Organizations must prioritize proactive dependency management, high-quality component selection and vigilance against vulnerabilities to mitigate escalating risks. A Software Bill of Materials (SBOM) is an essential tool in this approach, as it offers a comprehensive inventory of all software components, enabling organizations to quickly identify and address vulnerabilities across their dependencies. In fact, projects that implement an SBOM to manage open source software dependencies demonstrate a 264-day reduction in the time taken to fix vulnerabilities compared to those that do not. SBOMs provide a comprehensive list of every component within the software, enabling quicker response times to threats and bolstering overall security. However, despite the rise in SBOM usage, it is not keeping pace with the influx of new components being created, highlighting the need for enhanced automation, tooling and support for open source maintainers. ... This complacency — characterized by a false sense of security — accumulates risks that threaten the integrity of software supply chains. The rise of open source malware further complicates the landscape, as attackers exploit poor dependency management. 
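
The core SBOM workflow - inventory every component, then cross-reference a vulnerability feed - is simple to illustrate. Real SBOMs use standard formats such as CycloneDX or SPDX and richer version-range matching; the component entries and feed below are invented for the sketch:

```python
# Minimal SBOM represented as (name, version) entries; real SBOMs use
# standard formats such as CycloneDX or SPDX.
sbom = [
    {"name": "libfoo", "version": "1.2.0"},
    {"name": "libbar", "version": "4.1.3"},
]

# Hypothetical vulnerability feed mapping package name to affected versions.
known_vulns = {"libfoo": {"1.2.0", "1.2.1"}}

def affected_components(sbom, known_vulns):
    """Return components whose exact version appears in the vuln feed."""
    return [c for c in sbom
            if c["version"] in known_vulns.get(c["name"], set())]
```

Because the inventory already exists, answering "are we exposed?" becomes a lookup rather than an investigation - which is where the reported reduction in time-to-fix comes from.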

Daily Tech Digest - January 22, 2025

How Operating Models Need to Evolve in 2025

“In 2025, enterprises are looking to achieve autonomous and self-healing IT environments, which is currently referred to as ‘AIOps.’ However, the use of AI will become so common in IT operations that we won’t need to call it [that] explicitly,” says Ruh in an email interview. “Instead, the term, ‘AIOps’ will become obsolete over the next two years as enterprises move towards the first wave of AI agents, where early adopters will start deploying intelligent components in their landscape able to reason and take care of tasks with an elevated level of autonomy.” ... “The IT operating model of 2025 must adapt to a landscape shaped by rapid decentralization, flatter structures, and AI-driven innovation,” says Langley in an email interview. “These shifts are driven by the need for agility in responding to changing business needs and the transformative impact of AI on decision-making, coordination and communication. Technology is no longer just a tool but a connective tissue that enables transparency and autonomy across teams while aligning them with broader organizational goals.” ... “IT leaders must transition from traditional hierarchical roles to facilitators who harness AI to enable autonomy while maintaining strategic alignment. This means creating systems for collaboration and clarity, ensuring the organization thrives in a decentralized environment,” says Langley.


Cybersecurity is tough: 4 steps leaders can take now to reduce team burnout

Whether it’s about solidifying partnerships with business managers, changing corporate culture, or correcting errant employees, peer input is golden. No matter the scenario, it’s likely that other security leaders have dealt with the same or similar situations, so their input, empathy, and advice are invaluable. ... Well-informed leaders are more likely to champion and include security in new initiatives, an important shift in culture from seeing security as a pain to embracing security as an important business tool. Such a shift greatly reduces another top stressor among CISOs — lack of management support. In a security-centric organization, team members in all roles experience less pressure to perform miracles with no resources. And, instead of fighting with leaders for resources, the CISO has more time to focus on getting to know and better manage staff. ... Recognition, she says, boosts individual and team morale and motivation. “I am grateful for and do not take for granted having excellent leadership above me that supports me and my team. I try to make it easy for them.” And, since personal stressors also impact burnout, she encourages team members to share their personal stressors at her one-on-ones or in the group meeting where they can be supported.


Mandatory MFA, Biometrics Make Headway in Middle East, Africa

Digital identity platforms, such as UAE Pass in the United Arab Emirates and Nafath in Saudi Arabia, integrate with existing fingerprint and facial-recognition systems and can reduce the reliance on passwords, says Chris Murphy, a managing director with the cybersecurity practice at FTI Consulting in Dubai. "With mobile devices serving as the primary gateway to digital services, smartphone-based biometric authentication is the most widely used method in public and private sectors," he says. "Some countries, such as the UAE and Saudi Arabia, are early adopters of passwordless authentication, leveraging AI-based facial recognition and behavioral analytics for seamless and secure identity verification." African nations have also rolled out national identity cards based on biometrics. In South Africa, for example, customers can walk into a bank and open an account by using their fingerprint and linking it to the national ID database, which acts as the root of trust, says BIO-Key's Sullivan. "After they verify that that person is who they say they are with the Home Affairs Ministry, they can store that fingerprint [in the system]," he says. "From then on, anytime they want to authenticate that user, they just touch a finger. They've just now started rolling out the ability to do that without even presenting your card for subsequent business."


Acronis CISO on why backup strategies fail and how to make them resilient

Start by conducting a thorough business impact analysis. Figure out which processes, applications, and data sets are mission-critical, and decide how much downtime or data loss is acceptable. The more vital the data or application, the tighter (and more expensive) your RTO and RPO targets will be. Having a strong data and systems classification system will make this process significantly easier. There’s always a trade-off: the more stringent your RTO and RPO, the higher the cost and complexity of maintaining the necessary backup infrastructure. That’s why prioritisation is key. For example, a real-time e-commerce database might need near-zero downtime, while archived records can tolerate days of recovery time. Once you establish your priorities, you can use technologies like incremental backups, continuous data protection, and cross-site replication to meet tighter RTO and RPO without overwhelming your network or your budget. ... Start by reviewing any regulatory or compliance rules you must follow; these often dictate which data must be kept and for how long. Keep in mind that some information may not be kept longer than absolutely needed – personally identifiable information would come to mind. Next, look at the operational value of your data.
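
The tiering logic described above - stricter RTO/RPO targets buy costlier protection - might be sketched like this. The thresholds are purely illustrative; a real policy falls out of the business impact analysis, not fixed numbers:

```python
def backup_tier(rto_hours, rpo_hours):
    """Map recovery targets to an illustrative backup strategy tier.

    rto_hours: acceptable downtime before service must be restored.
    rpo_hours: acceptable window of data loss.
    """
    if rto_hours < 1 and rpo_hours < 1:
        # Near-zero targets, e.g. a real-time e-commerce database.
        return "continuous data protection + cross-site replication"
    if rto_hours <= 24:
        return "incremental backups with offsite copies"
    # Days of tolerance, e.g. archived records.
    return "periodic archival backups"
```

Classifying systems this way up front keeps the expensive tiers reserved for the workloads that actually justify them.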


The bitter lesson for generative AI adoption

The rapid pace of innovation and the proliferation of new models have raised concerns about technology lock-in. Lock-in occurs when businesses become overly reliant on a specific model with bespoke scaffolding that limits their ability to adapt to innovations. Upon its release, GPT-4 was the same cost as GPT-3 despite being a superior model with much higher performance. Since the GPT-4 release in March 2023, OpenAI prices have fallen another six times for input data and four times for output data with GPT-4o, released May 13, 2024. Of course, an analysis of this sort assumes that generation is sold at cost or a fixed profit, which is probably not true, and significant capital injections and negative margins for capturing market share have likely subsidized some of this. However, we doubt these levers explain all the improvement gains and price reductions. Even Gemini 1.5 Flash, released May 24, 2024, offers performance near GPT-4, costing about 85 times less for input data and 57 times less for output data than the original GPT-4. Although eliminating technology lock-in may not be possible, businesses can reduce their grip on technology adoption by using commercial models in the short run.
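
The quoted multiples are simple ratios of per-token list prices. The figures below are approximate public prices at the time of the comparison, assumed here for illustration (USD per million tokens):

```python
# Approximate list prices, USD per million tokens (assumed for illustration).
gpt4_input, gpt4_output = 30.00, 60.00    # original GPT-4 (March 2023)
flash_input, flash_output = 0.35, 1.05    # Gemini 1.5 Flash (May 2024)

input_ratio = gpt4_input / flash_input     # ~85x cheaper input
output_ratio = gpt4_output / flash_output  # ~57x cheaper output
```

Under these assumptions the ratios reproduce the "about 85 times" and "57 times" figures in the text - which is why bespoke scaffolding around one model can become expensive lock-in so quickly.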


Staying Ahead: Key Cloud-Native Security Practices

NHIs represent machine identities used in cybersecurity. They are created by combining a “Secret” (an encrypted password, token, or key) and the permissions allocated to that Secret by a receiving server. In an increasingly digital landscape, the role of these machine identities and their secrets cannot be overstated. This makes the management of NHIs a top priority for organizations, particularly those in industries like financial services, healthcare, and travel. ... As technology has advanced, so too has the need for more thorough and advanced cybersecurity practices. One rapidly evolving area is the management of Non-Human Identities (NHIs), which undeniably interweaves secret data. Understanding and efficiently managing NHIs and their secrets are not just choices but an imperative for organizations operating in the digital space and leaning toward cloud-native applications. NHIs have been sharing their secrets with us for some time, communicating an urgent requirement for attention, understanding and improved security practices. They give us hints about potential security weaknesses through unique identifiers that are not unlike a travel passport. By monitoring, managing, and securely storing these identifiers and the permissions granted to them, we can bridge the troublesome chasm between the security and R&D teams, making for better-protected organizations.


3 promises every CIO should keep in 2025

To minimize disappointment, technologists need to set the expectations of business leaders. At the same time, they need to evangelize on the value of new technology. “The CIO has to be an evangelist, educator, and realist all at the same time,” says Fernandes. “IT leaders should be under-hypers rather than over-hypers, and promote technology only in the context of business cases.” ... According to Leon Roberge, CIO for Toshiba America Business Solutions and Toshiba Global Commerce Solutions, technology leaders should become more visible to the business and lead by example to their teams. “I started attending the business meetings of all the other C-level executives on a monthly basis to make sure I’m getting the voice of the business,” he says. “Where are we heading? How are we making money? How can I help business leaders overcome their challenges and meet their objectives?” ... CIOs should also build platforms for custom tools that meet the specific needs not only of their industry and geography, but of their company — and even for specific divisions. AI models will be developed differently for different industries, and different data will be used to train for the healthcare industry than for logistics, for example. Each company has its own way of doing business and its own data sets. 


5G in Business: Roadblocks, Catalysts in Adoption - Part 1

Enterprises considering 5G adoption are confronted with several challenges, key among them being high capex, security, interoperability and integration with existing infrastructure, and skills development within their workforce. Consistent coverage and navigating the complex regulatory landscape are also inhibitors to adoption. Jenn Mullen, emerging technology solutions lead at Keysight Technologies, told ISMG that business leaders must address potential security concerns, ensure seamless integration with existing IT infrastructure and demonstrate a strong return on investment. ... Early enterprise 5G projects were unsuccessful as the applications and devices weren't 5G compatible. For instance, in 2021, ArcelorMittal France conceived 5G Steel, a private cellular network serving its steelworks in Dunkirk, Mardyck and Florange (France) - to support its digitalization plans with high-speed, site-wide 5G connectivity. The private network, which covers a 10 square kilometer area, was built by French public network operator Orange. When it turned the network on in October 2022, the connecting devices were only 4G, leading to underutilization. "The availability of 5G-compatible terminals suitable for use in an industrial environment is too limited," said David Glijer, the company's director of digital transformation at the time.


Rethinking Business Models With AI

We arrive in a new era of transforming business models and organizations by leveraging the power of Gen AI. An AI-powered business model is an organizational framework that fundamentally integrates AI into one or more core aspects of how a company creates, delivers and captures value. Unlike traditional business models that merely use AI as a tool for optimization, a truly AI-powered business model exhibits distinctive characteristics, such as self-reinforcing intelligence, scalable personalization and ecosystem integration. ... As an organization moves through its AI-powered business model innovation journey, it must systematically consider the eight essentials of AI-driven business models (Figure 3) and include a holistic assessment of current state capabilities, identification of AI innovation opportunities and development of a well-defined map of the transformation journey. Following this, rapid innovation sprints should be conducted to translate strategic visions into tangible results that validate the identified AI opportunities and de-risk at-scale deployments. ... While the potential rewards are compelling — from operational efficiencies to entirely new value propositions — the journey is complex and fraught with pitfalls, not least from existing barriers. 


Increase in cyberattacks setting the stage for identity security’s rapid growth

Digital identity security is rapidly growing in importance as identity infrastructure becomes a target for cyber attackers. Misconfigurations of identity systems have become a significant concern – but many companies still seem unaware of the issue. Security expert Hed Kovetz says that “identity is always the go-to of every attacker.” As CEO and co-founder of digital identity protection firm Silverfort, he believes that protecting identity is one of their most complicated tasks. “If you ask any security team, I think identity is probably the one that is the most complex,” says Kovetz. “It’s painful: There are so many tools, so many legacy technologies and legacy infrastructure still in place.” ... To secure identity infrastructures, security specialists need to deal with both very old and very new technologies consistently. Kovetz says he first began dealing with legacy systems that could not be properly secured and could be used by attackers to spread inside the network. He later extended this protection to more modern technologies as well. “I think that protecting these things end to end is the key,” says Kovetz. “Otherwise, attackers will always go to the weaker part.” ... Although the increase in cyberattacks is setting the stage for identity security’s rapid growth in importance, some organizations are still struggling to acknowledge weaknesses in their identity infrastructure.



Quote for the day:

"All leadership takes place through the communication of ideas to the minds of others." -- Charles Cooley

Daily Tech Digest - January 21, 2025

AI comes alive: From bartenders to surgical aides to puppies, tomorrow’s robots are on their way

The current generation of robots faces three key challenges: processing visual information quickly enough to react in real-time; understanding the subtle cues in human behavior; and adapting to unexpected changes in their environment. Most humanoid robots today are dependent on cloud computing and the resulting network latency can make simple tasks like picking up an object difficult. ... Gen AI powers spatial intelligence by helping robots map their surroundings in real-time, much like humans do, predicting how objects might move or change. Such advancements are crucial for creating autonomous humanoid robots capable of navigating complex, real-world scenarios with the adaptability and decision-making skills needed for success. While spatial intelligence relies on real-time data to build mental maps of the environment, another approach is to help the humanoid robot infer the real world from a single still image. As explained in a pre-published paper, Generative World Explorer (GenEx) uses AI to create a detailed virtual world from a single image, mimicking how humans make inferences about their surroundings. ... Beyond the purely technical obstacles, potential societal objections must be overcome.


Why some companies are backing away from the public cloud

Technical debt may be the root of many moves back to on-premise environments. "Normally this is a self-inflicted thing," Linthicum said. "They didn't refactor the applications to make them more efficient in running on the public cloud providers. So the public cloud providers, much like if we're pulling too much electricity off the grid, just hit them with huge bills to support the computational and storage needs of those under-optimized applications." Rather than spending more money to optimize or refactor applications, these same enterprises put them back on-premise, said Linthicum. Security and compliance are also an issue. Enterprises "realize that it's too expensive to remain compliant in the cloud, with data and sovereignty rules. So, they just make a decision to push it back on-premise." The perceived high costs of cloud operations "often stem from lift-and-shift migrations that in some cases didn't optimize applications for cloud environments," said Miha Kralj, global senior partner for hybrid cloud service at IBM Consulting. "These direct transfers typically maintain existing architectures that don't leverage cloud-native capabilities, resulting in inefficient resource utilization and unexpectedly high expenses." However, the solution to this problem "isn't necessarily repatriation to on-premises infrastructure," said Kralj. 


7 Common Pitfalls in Data Science Projects — and How to Avoid Them

It's worth noting, too, that just because data is of low quality at the start of a project doesn't mean the project is bound to fail. There are many effective techniques for improving data quality, such as data cleansing and standardization. When projects fail, it's typically because they failed to assess data quality and improve it as needed, not because the data was so poor in quality that there was no saving it. ... There are two key stakeholders in any data science project — the IT department, which is responsible for managing data assets, and business users, who determine what the data science project should achieve. Unfortunately, poor collaboration between these groups can cause projects to fail. For example, IT departments might decide to impose access restrictions on data without consulting business users, leading to situations where the business can't actually use the data in the way it intends. Or lack of input from business stakeholders about what they want to do may cause the IT team to struggle to determine how to deliver the data resources necessary to support a project. ... A final key challenge that can thwart data science project success is the failure to understand what the goals of data science are, and which methodologies and resources data science requires.
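
Data cleansing and standardization, mentioned above as the antidote to low-quality data, often comes down to normalizing representations so duplicates become visible. A stdlib-only sketch, with rows and date formats invented for illustration:

```python
from datetime import datetime

# Invented raw rows: inconsistent casing, whitespace, and date formats.
raw_rows = [
    {"customer": " Alice ", "signup": "05/01/2024"},
    {"customer": "alice",  "signup": "2024-01-05"},   # same record, different format
    {"customer": "Bob",    "signup": "2024-02-10"},
]

def clean(rows):
    """Standardize names and dates, then drop the duplicates that
    normalization reveals."""
    seen, out = set(), []
    for row in rows:
        name = row["customer"].strip().lower()
        date = row["signup"]  # fall back to the raw value if no format matches
        for fmt in ("%Y-%m-%d", "%d/%m/%Y"):
            try:
                date = datetime.strptime(row["signup"], fmt).date().isoformat()
                break
            except ValueError:
                continue
        key = (name, date)
        if key not in seen:
            seen.add(key)
            out.append({"customer": name, "signup": date})
    return out
```

Before normalization the first two rows look distinct; after it they collapse into one record - the kind of quality improvement that rescues a project whose input data looked hopeless at the start.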


Facial recognition for borders and travel: 2025 trends and insights

Seamless and secure border crossings are crucial for a thriving travel industry. However, border control processes that still rely on traditional manual checks pose unnecessary risks to both national security and traveler satisfaction. Slow and cumbersome identity verification conducted by humans leads to long lines and frustrated travelers. This is where biometrics come in. Biometric technologies, particularly facial recognition, are revolutionizing border security by providing a faster, more secure and more efficient approach to verifying traveler identities. As passenger volumes continue to rise globally, transportation authorities and immigration agencies quickly realize the value of onboarding facial recognition technology to streamline busy and mission-critical border crossings — helping improve throughput, reduce wait times and enhance the overall traveler experience. ... By adopting advanced facial recognition technologies, immigration authorities can: improve the traveler experience, as self-service authentication shortens wait times and delivers a satisfying, hassle-free journey; deliver fast and reliable authentication, with the entire process to authenticate an individual now accomplished in seconds; and enhance border security.


AI-Driven Microservices: The Future of Cloud Scalability

Even with modern auto-scaling in cloud platforms, the limitations are clear. Scaling remains largely reactive, with additional servers spinning up only after demand spikes are detected. This lag leads to temporary throttling and performance degradation. Over-provisioning for peak demand then results in wasted CPU and server capacity during subsequent low-traffic periods. The inadequacy of threshold-based auto-scaling becomes particularly apparent during high-traffic events like holiday sales. Engineers often find themselves on-call to handle performance issues manually, adding operational overhead and delaying service recovery. These systems lack predictive capabilities and struggle to optimize cost and performance simultaneously. ... AI offers a solution to these challenges. Through my experience with cloud-native platforms, I have seen how AI can transform scaling capabilities by incorporating predictive analytics. Instead of waiting for problems to occur, AI-driven systems can analyze historical patterns, current trends and multiple data points to anticipate resource needs in advance. This innovation has particular significance for smaller enterprises, enabling them to compete effectively with larger organizations that have traditionally dominated due to superior infrastructure capabilities.
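
As a rough illustration of the predictive approach described here, the sketch below forecasts the next interval's load from recent history and provisions replicas ahead of the spike rather than after it. The forecast method, the capacity figure (100 requests/sec per replica) and the minimum replica count are hypothetical assumptions, not a production policy:

```python
import math
from statistics import mean

def forecast_next(history, window=3):
    """Naive trend forecast: recent average plus the rise across the window."""
    recent = history[-window:]
    trend = recent[-1] - recent[0]
    return mean(recent) + trend

def replicas_needed(predicted_rps, rps_per_replica=100, min_replicas=2):
    """Provision for the predicted load instead of reacting to the current one."""
    return max(min_replicas, math.ceil(predicted_rps / rps_per_replica))

history = [220, 310, 450, 640]   # requests/sec over recent intervals
predicted = forecast_next(history)
print(replicas_needed(predicted))  # scales out before the spike lands
```

Real systems would replace the naive trend with a learned time-series model, but the structural difference from threshold-based auto-scaling is the same: the scaling decision consumes a forecast, not a lagging metric.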


More AI, More Problems for Software Developers in 2025

Using AI to generate code can leave users — especially more junior developers — without the context of how the code was written and who it was written for, making it harder to figure out what’s gone wrong. “The risk is generally higher for junior developers. Senior developers tend to have a much better awareness and quicker understanding of the code that’s generated,” Reynolds observed. “Junior developers are under a lot of pressure to get the job done. They want to move fast, and they don’t necessarily have that contextual awareness of the code change.” Without quality and governance controls — like security scans and dependency checks, and unit, systems and integration testing — deployed throughout the software development lifecycle, he warned, the wrong thing is often merged. ... Shadow IT has developers looking to engineer their way out of a problem by adopting — and often even paying for — tools that aren’t among those officially approved by their employers. Shadow AI is an extension of this: the report found that 52% of developers use AI tools that aren’t provided by or explicitly approved by IT. It’s not that developers are behaving insubordinately. The reality is, three years into widespread adoption of generative AI, most organizations still don’t have GenAI policies.


7 top cybersecurity projects for 2025

To effectively secure AI workloads, security teams should first gain an understanding of AI use within their enterprise, as well as the data and models used to power their business. “Next, assemble a cross-functional team to assess risks and develop a comprehensive security strategy,” Ramamoorthy advises. “Following best practices and adopting a secure AI framework will help to enable a strong security foundation and ensure that when AI models are implemented, they are secure by default.” ... With a successful TPRM project, your enterprise will have a better security posture, with fewer vulnerabilities and proactive control over outside hazards, Saine says. TPRM, backed by real-time monitoring and the ability to quickly respond to developing hazards, can also ensure compliance with pertinent laws, reducing the risk of fines and legal headaches. “Compliance will also help your enterprise project credibility and dependability to clients and partners,” he says. ... When implementing trust-by-design principles with AI-powered systems, security leaders should align their goals with overall enterprise objectives while obtaining buy-in from key executives and stakeholders. Additionally, conducting thorough assessments of the development processes can help identify vulnerabilities while prioritizing remediation and controls. 


The Tech Blanket: Building a Seamless Tech Ecosystem

Traditionally, organizations have built their technology strategies around “tech stacks”—discrete tools for solving specific problems. While effective in the short term, this approach often creates silos, with each department operating within its own set of platforms. Knowledge and data are trapped, preventing the organization from realizing its full potential. In 2024, many companies recognized the limitations of this approach and began prioritizing integration. This trend will deepen in 2025 as businesses build interconnected ecosystems where tools work together harmoniously. According to Deloitte, 58% of companies are shifting their focus toward integrating their platforms into unified ecosystems rather than continuing to invest in standalone tools.  ... One of the biggest challenges in building a seamless tech ecosystem is ensuring that tools communicate effectively. Selecting platforms that support open APIs is essential for facilitating easy integration. Open APIs allow different systems to share data and work together, eliminating friction and enabling better collaboration. In practical terms, this means teams can pull insights from a centralized knowledge management platform into other tools, such as CRM systems or analytics dashboards, without additional manual effort. The result? A more connected organization that can move at the speed of business.


AI Poised to Deliver Value, Innovation to Software Industry in 2025

“IoT technology has created a new level of visibility into complex, live systems and enables vital insights. By providing real-time data streams for millions of devices, IoT enables them to be monitored for issues and controlled from a distance. This will lead to ever-increasing safety, security, and efficiency in their operation. Smart buildings, transportation systems, logistics networks, and countless other applications all benefit from using IoT to provide essential services at reasonable cost. ... “The demand for faster software development has become a serious industry threat, increasing code vulnerabilities and leading to avoidable security risks. This relentless development pace is unsustainable and only being accelerated by Generative AI. The more we speed up development and release cycles with GenAI and otherwise, the more code vulnerabilities are introduced, giving attackers more opportunities to execute their missions. ... “AI is poised to become a foundational business tool, joining virtualization, cloud computing, and containerization as essential layers of modern infrastructure. By 2025, startups and enterprises will routinely leverage AI for tasks like security, audits, and cost management. 


AI and cybersecurity: A double-edged sword

How exactly is AI tipping the scales in favor of cybersecurity professionals? For starters, it’s revolutionizing threat detection and response. AI systems can analyze vast amounts of data in real time, identifying potential threats with speed and accuracy. Companies like CrowdStrike have documented that their AI-driven systems can detect threats in under one second. But AI’s capabilities don’t stop at detection. When it comes to incident response, AI is proving to be a game-changer. Imagine a security system that doesn’t just alert you to a threat but takes immediate action to neutralize it. That’s the potential of AI-driven automated incident response. From isolating compromised systems to blocking malicious IP addresses, AI can execute these critical tasks swiftly and without human input, dramatically reducing response times and minimizing potential damage. ... AI is not just changing the skill set required for cybersecurity professionals, it’s augmenting it for the better. The ability to work alongside AI systems, interpret their outputs, and make strategic decisions based on AI-generated insights will be paramount for both users and experts. While AI’s cybersecurity capabilities keep improving, a human paired with an AI tool will still outperform AI by itself ten-fold.



Quote for the day:

"Your present circumstances don’t determine where you can go; they merely determine where you start." -- Nido Qubein

Daily Tech Digest - January 20, 2025

Robots get their ‘ChatGPT moment’

Nvidia implies that Cosmos will usher in a “ChatGPT moment” for robotics. The company means that, just as Google’s Transformer model enabled the radically accelerated training that turned long-existing neural network technology into LLM chatbots like ChatGPT, Cosmos could do the same for robot training. In the more familiar world of LLMs, we’ve come to understand the relationship between the size of the data sets used for training these models, the speed of that training, and the resulting performance and accuracy. ... Driving in the real world with a person as backup is time-consuming, expensive, and sometimes dangerous — especially when you consider that autonomous vehicles need to be trained to respond to dangerous situations. Using Cosmos to train autonomous vehicles would involve the rapid creation of huge numbers of simulated scenarios. For example, imagine the simulation of every kind of animal that could conceivably cross a road — bears, deer, dogs, cats, lizards, etc. — in tens of thousands of different weather and lighting conditions. By the end of all this training, the car’s digital twin in Omniverse would be able to recognize and navigate scenarios of animals on the road regardless of the animal and the weather or time of day. That learning would then be transferred to thousands of real cars, which would also know how to navigate those situations.


How to Use AI in Cyber Deception

Adaptation is one of the most significant ways AI improves honey-potting strategies. Machine learning subsets can evolve alongside bad actors, enabling them to anticipate novel techniques. Conventional signature-based detection methods are less effective because they can only flag known attack patterns. Algorithms, on the other hand, use a behavior-based approach. Synthetic data generation is another one of AI’s strengths. This technology can produce honeytokens — digital artifacts purpose-built for deceiving would-be attackers. For example, it could create bogus credentials and a fake database. Any attempt to use those during login can be categorized as malicious because it means they used illegitimate means to gain access and exfiltrate the imitation data. While algorithms can produce an entirely synthetic dataset, they can also add certain characters or symbols to existing, legitimate information to make its copy more convincing. Depending on the sham credentials’ uniqueness, there’s little to no chance of false positives. Minimizing false positives is essential since most of the tens of thousands of security alerts professionals receive daily are inaccurate. This figure may be even higher for medium- to large-sized enterprises using conventional behavior-based scanners or intrusion detection systems because they’re often inaccurate.
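
As a rough sketch of the honeytoken mechanism described above (using simple random generation rather than an ML model), the code below plants fake credentials and treats any login attempt against them as malicious by definition. The naming scheme and token format are illustrative assumptions:

```python
import secrets
import string

def make_honeytoken(prefix="svc"):
    """Generate a unique fake credential; it has no legitimate use,
    so any attempt to log in with it signals an intrusion."""
    username = f"{prefix}-{secrets.token_hex(4)}"
    alphabet = string.ascii_letters + string.digits
    password = "".join(secrets.choice(alphabet) for _ in range(16))
    return {"username": username, "password": password}

# Registry of planted tokens, e.g. seeded into a decoy database.
PLANTED = {make_honeytoken()["username"] for _ in range(3)}

def is_honeytoken_login(username):
    """A match here is a high-confidence alert, not a probable false positive."""
    return username in PLANTED

token = make_honeytoken()
PLANTED.add(token["username"])
print(is_honeytoken_login(token["username"]))  # the trap fires on any use
```

Because the tokens are unique and never issued to real users, a hit carries essentially no false-positive risk, which is the property the article highlights.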


How organizations can secure their AI code

Organizations also expose themselves to risks when developers download machine learning (ML) models or datasets from platforms like Hugging Face. “In spite of security checks on both ends, it may still happen that the model contains a backdoor that becomes active once the model is integrated,” says Alex Ștefănescu, open-source developer at the Organized Crime and Corruption Reporting Project (OCCRP). “This could ultimately lead to data being leaked from the company that used the malicious models.” ... Not all AI-based tools are coming from teams full of software engineers. “We see a lot of adoption being driven by data analysts, marketing teams, researchers, etc. within organizations,” Meyer says. These teams aren’t traditionally developing their own software but are increasingly writing simple tools that adopt AI libraries and models, so they’re often not aware of the risks involved. “This combination of shadow engineering with lower-than-average application security awareness can be a breeding ground for risk,” he adds. ... When it comes to securing enough resources to protect AI systems, some stakeholders might hesitate, viewing it as an optional expense rather than a critical investment. “AI adoption is a divisive topic in many organizations, with some leaders and teams being ‘all-in’ on adoption and some being strongly resistant,” Meyer says. 


AI-driven insights transform security preparedness and recovery

IT security teams everywhere are struggling to meet the scale of actions required to ensure IT operational risk remediation from continually evolving threats. Recovering digital operations after an incident requires a proactive system of IT observability, intelligence, and automation. Organizations should first unify visibility across their IT environments, so they can quickly identify and respond to incidents. Additionally, teams need to eliminate data silos to prevent monitoring overload and resolve issues. ... Unfortunately, many companies still lack the foundational elements needed for successful and secure AI adoption. Common challenges include fragmented or low-quality data dispersed across multiple silos, lack of coordination, a shortage of specialized talent such as data and AI engineers, and a company culture resistant to change. Fostering a culture of security awareness starts with making security a visible and integral part of everyday operations. IT leaders should focus on equipping employees with actionable insights through tools that simplify complex security issues. Training programs, tailored to different roles, help ensure that teams understand specific threats relevant to their responsibilities. Providing real-time feedback, such as simulated scenarios, builds practical awareness.


AI Is Quietly Steering Your Decisions - Before You Make Them

Agentic AI here is a critical enabler. These systems analyze user data over various modalities, including text, voice and behavioral patterns to predict intentions and influence outcomes. They are more than a handy assistant helping you cross off a to-do list. OpenAI CEO Sam Altman called these agents "AI's killer function," comparing them to "super competent colleagues that know absolutely everything about my whole life - every email, every conversation I've ever had - but don't feel like an extension." And they are everywhere. Microsoft and Google spearheaded chatbot integration into everyday tools, with Microsoft embedding its Bing Chat and AI assistants into Office software and Google enhancing productivity tools such as Workspace with Gemini capabilities. The study cited the example of Meta, which has claimed to achieve human-level play in the game Diplomacy using their AI agent CICERO. The research team behind CICERO, it says, cautions against "the potential danger for conversational AI agents" that "may learn to nudge its conversational partner to achieve a particular objective." Apple's App Intents framework, it explained, has protocols to "predict actions someone might take in the future" and "to suggest the app intent to someone in the future using predictions you [the developer] provide."


Why digital brands investing in AI to replace humans will fail

Despite its strengths, AI cannot (yet) accurately replicate core human qualities such as emotional intelligence, critical thinking, and nuanced judgment. What it can do is automate time-consuming, repetitive operations. Rather than attempting to replace human workers, forward-thinking organisations should embrace the power of human-AI collaboration. By approaching AI this way, brands can respond to customers’ digital problems faster, meaning employees can use the time gained to direct their efforts to complex problem-solving, strategic planning and customer relations. Those that adopt a hybrid approach, finding the optimal balance between AI and human insight, will be most successful. The collaboration between AI-powered tools and human intelligence creates a powerful combination that can strengthen performance, drive innovation, and help deliver a better overall customer experience. ... On the other hand, businesses that are looking to replace workers, and eventually rely solely on AI-generated operations, risk losing the genuine human touch. This loss of authenticity has the potential to alienate customers, leaving them to feel that their experiences with digital brands are insincere and mechanical.


From devops to CTO: 5 things to start doing now

If you want to be recognized for promotions and greater responsibilities, the first place to start is in your areas of expertise and with your team, peers, and technology leaders. However, shift your focus from getting something done to a practice leadership mindset. Develop a practice or platform your team and colleagues want to use and demonstrate its benefits to the organization. ... One of the bigger challenges for engineers when taking on larger technical responsibilities is shifting their mindset from getting work done today to deciding what work to prioritize and influencing longer-term implementation decisions. Instead of developing immediate solutions, the path to CTO requires planning architecture, establishing governance, and influencing teams to adopt self-organizing standards. ... “If devops professionals want to be considered for the role of CTO, they need to take the time to master a wide range of skills,” says Alok Uniyal, SVP and head of IT process consulting practice at Infosys. “You cannot become a CTO without understanding areas such as enterprise architecture, core software engineering and operations, fostering tech innovation, the company’s business, and technology’s role in driving business value. Showing leadership that you understand all technology workstreams at a company as well as key tech trends and innovations in the industry is critical for CTO consideration.”


The Human Touch in Tech: Why Local IT Support Remains Essential

While AI can handle common issues, complex or unforeseen problems often require creative solutions and in-depth technical expertise. Call center agents, with limited access to resources — and often operating under strict protocols — may be unable to depart from standardized procedures, even when doing so might be beneficial. The collaborative, adaptable problem-solving approach of a skilled, experienced IT technician is often the key to resolving these intricate challenges. Many IT issues require physical intervention and hands-on troubleshooting. Remote support, though helpful, can't always address hardware problems, network configurations, or security breaches that require on-site assessment and repair. Local IT support companies offering on-site visits have a clear advantage in addressing these types of issues efficiently and effectively. ... Local providers often possess a wide range of skills and experience, allowing them to handle a broader spectrum of issues. Their ability to think creatively and collaboratively enables them to address complex problems that may stump call center agents or AI systems. Furthermore, their local presence allows for swift on-site responses to critical situations.


Six ways to reduce cloud database costs without sacrificing performance

Automate data archiving or deletion for unused or outdated records. Use lifecycle policies to move logs older than a specific number of days to cheaper storage, or delete them. TTL (Time to Live) offers an easier way to implement such a data lifecycle. TTL refers to a setting that defines the lifespan of a piece of data (e.g., a record or document) in the database. After the specified TTL expires, the data is automatically deleted or marked for deletion by the database. ... Consolidating multiple applications onto a single database results in fewer instances, reducing compute and storage costs and enabling efficient resource utilisation when workloads have similar usage patterns. The implementation can follow schema-based isolation, where a separate schema is created for each tenant, or row-level isolation, where a tenant ID column is used to segment data within tables. One example is to host a SaaS platform for multiple customers on a single database instance with logical partitions. ... Creating copies of specific data items can enhance read performance by reducing costly operations. In an e-commerce store example, you’d typically have separate tables for customers, products, and orders. Retrieving one customer’s order history would involve a query that joins the order table with the customer table and product table.
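
The TTL behavior described above can be sketched as lazy expiry, in the style many databases use internally; the record shape and helper names are illustrative, not any particular database's API:

```python
import time

def put(store, key, value, ttl_seconds, now=None):
    """Store a record with an absolute expiry timestamp."""
    now = time.time() if now is None else now
    store[key] = {"value": value, "expires_at": now + ttl_seconds}

def get(store, key, now=None):
    """Return the value if still live; expired records read as missing."""
    now = time.time() if now is None else now
    rec = store.get(key)
    if rec is None or rec["expires_at"] <= now:
        store.pop(key, None)  # lazy deletion on read, as many databases do
        return None
    return rec["value"]

db = {}
put(db, "session:42", {"user": "alice"}, ttl_seconds=30, now=1000.0)
print(get(db, "session:42", now=1010.0))  # still live
print(get(db, "session:42", now=1031.0))  # expired → None, record removed
```

The point of TTL over cron-style cleanup jobs is exactly this: the expiry policy travels with the data, so stale records stop costing storage without any application-side housekeeping.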


AI, IoT, and cybersecurity are at the heart of our innovation: Sharat Sinha, Airtel Business

At Airtel Business, we understand that cybersecurity is a growing concern for Indian enterprises. With cyberattacks in India projected to reach one trillion per year by 2033, businesses need robust solutions to safeguard their digital assets. That’s where Airtel Secure Internet and Airtel Secure Digital Internet come in. Airtel Secure Internet, in collaboration with Fortinet, provides comprehensive end-to-end protection by integrating Fortinet’s advanced firewall with Airtel’s high-speed Internet Leased Line (ILL). This solution offers 24/7 monitoring, real-time threat detection, and automated mitigation, all powered by Airtel’s Security Operations Centre (SOC) and Fortinet’s SOAR platform. It ensures businesses are protected from a range of cyberthreats while optimising operational efficiency, without the need for large capital investments in security infrastructure. In addition, Airtel Secure Digital Internet, in partnership with Zscaler, uses Zero Trust Architecture (ZTA) to continuously validate user, device, and network interactions. Combining Zscaler’s cloud security with Security Service Edge (SSE) technology, this solution ensures secure cloud access, SSL inspection, and centralised policy enforcement, helping businesses reduce attack surfaces and simplify security management. 



Quote for the day:

"The greatest leader is not necessarily the one who does the greatest things. He is the one that gets the people to do the greatest things." -- Ronald Reagan

Daily Tech Digest - January 19, 2025

Service as Software: How AI Agents Are Transforming SaaS

SaaS empowered users across industries by providing the tools and intelligence to make informed decisions. But it has always stopped short of execution. Lawyers, radiologists, tax consultants, and other service providers rely on SaaS to make decisions, but they remain responsible for the last-mile activity. Service as Software closes this gap. Agents powered by capable LLMs and integrated with existing APIs — and even SaaS platforms — don’t just inform users, they take action on their behalf. Instead of providing tools for human service providers, Service as Software directly delivers outcomes. This transformation is more than technological — it’s economic. ... Enterprises considering transitioning from SaaS to Service as Software often begin by examining which tasks would yield the most value from automation. These tasks are typically repetitive, time-sensitive, or error-prone when conducted manually. Introducing an intelligent agent that can monitor data streams, evaluate decision rules and initiate final actions may require augmenting existing infrastructure — for instance, adding webhooks, implementing new API endpoints, or integrating a rules engine.
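
The monitor-evaluate-act loop that distinguishes Service as Software from SaaS can be sketched with a minimal rules engine; the rule predicates and action names below are hypothetical stand-ins for real API integrations:

```python
# Minimal monitor → evaluate → act loop. Each rule pairs a decision
# predicate with the action the agent initiates on the user's behalf.
RULES = [
    {"name": "refund_small_dispute",
     "when": lambda e: e["type"] == "dispute" and e["amount"] < 50,
     "action": "issue_refund"},
    {"name": "escalate_large_dispute",
     "when": lambda e: e["type"] == "dispute" and e["amount"] >= 50,
     "action": "open_case"},
]

def handle_event(event, executed):
    """Evaluate decision rules and initiate the final action — the agent
    delivers the outcome rather than merely informing a human."""
    for rule in RULES:
        if rule["when"](event):
            executed.append((rule["action"], event["id"]))
            return rule["action"]
    return None  # no rule matched; nothing to do

log = []
print(handle_event({"id": "e1", "type": "dispute", "amount": 20}, log))
print(handle_event({"id": "e2", "type": "dispute", "amount": 500}, log))
```

In a real deployment the predicates would be informed by an LLM's assessment and the actions would call payment or case-management APIs, but the architectural shift is the same: the system closes the last mile instead of stopping at a recommendation.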


Anthropomorphizing AI: Dire consequences of mistaking human-like for human have already emerged

Perhaps the most dangerous aspect of anthropomorphizing AI is how it masks the fundamental differences between human and machine intelligence. While some AI systems excel at specific types of reasoning and analytical tasks, the large language models (LLMs) that dominate today’s AI discourse — and that we focus on here — operate through sophisticated pattern recognition. These systems process vast amounts of data, identifying and learning statistical relationships between words, phrases, images and other inputs to predict what should come next in a sequence. When we say they “learn,” we’re describing a process of mathematical optimization that helps them make increasingly accurate predictions based on their training data. ... One critical area where anthropomorphizing creates risk is content generation and copyright compliance. When businesses view AI as capable of “learning” like humans, they might incorrectly assume that AI-generated content is automatically free from copyright concerns. ... One of the most concerning costs is the emotional toll of anthropomorphizing AI. We see increasing instances of people forming emotional attachments to AI chatbots, treating them as friends or confidants.


Building Secure Software - Integrating Security in Every Phase of the SDLC

A common problem in software development is that security-related activities are left out or deferred until the final testing phase, which is too late in the SDLC, after most of the critical design and implementation has been completed. Moreover, the security checks performed during the testing phase can be superficial, limited to scanning and penetration testing, which might not reveal more complex security issues. By adopting the shift-left principle, teams are able to detect and fix security flaws early on, save money that would otherwise be spent on costly rework, and have a better chance of avoiding delays going into production. Integrating security into the SDLC should look like weaving rather than stacking. There is no “security phase,” but rather a set of best practices and tools that should be included within the existing phases of the SDLC. A Secure SDLC requires adding security review and testing at each software development stage, from design, to development, to deployment and beyond. From initial planning to deployment and maintenance, embedding security practices ensures the creation of robust and resilient software.


Making AI greener starts with smarter data center design

There’s been a lot of talk about the off-grid energy investments of hyperscalers. But the energy efficiency of AI infrastructure also has a big role to play. Nokia provides networking connectivity inside and between data centers, as well as between end users and data center applications. Understanding this intricate web is important as it’s not just about making the processes inside a data center faster and more efficient. It’s about making the entire journey between somebody making an AI request—and getting back a response—quick, secure, and more energy efficient. ... Energy, performance, and cost considerations may prompt some cloud providers to build their data centers in remote locations with access to clean energy, passive cooling, and cheaper and more plentiful real estate. However, data sovereignty laws, security concerns, and the ultra-low latency requirements of industrial applications may see a move toward more distributed cloud computing, with AI workloads moving closer to the end user. This would likely lead to more regional, metropolitan, and edge data centers, with some businesses and organizations opting for on-site data centers for mission-critical functions. We may, in fact, see both trends at the same time.


Employees Enter Sensitive Data Into GenAI Prompts Far Too Often

"Utilizing AI for the sake of using AI is destined to fail," said Kris Bondi, CEO and co-founder of Mimoto, in an emailed statement to Dark Reading. "Even if it gets fully implemented, if it isn't serving an established need, it will lose support when budgets are eventually cut or reappropriated." Though Kowski believes that not incorporating GenAI is risky, success can still be achieved, he notes. "Success without AI is still achievable if a company has a compelling value proposition and strong business model, particularly in sectors like engineering, agriculture, healthcare, or local services where non-AI solutions often have greater impact," he said. If organizations do want to pursue incorporating GenAI tools but want to mitigate the high risks that come along with it, the researchers at Harmonic have recommendations on how to best approach this. The first is to move beyond "block strategies" and implement effective AI governance: deploy systems that track input into GenAI tools in real time; identify which plans are in use and ensure that employees use paid plans for their work rather than plans that train on inputted data; gain full visibility over these tools; classify sensitive data; create and enforce workflows; and train employees on the best practices and risks of responsible GenAI use.


What is Blue Ocean Strategy? 3 Key Ways to Build a Business in an Uncontested Market

One of the biggest surprises in tackling a neglected market segment is realizing that your future customers might not even know they need you. They may sense a vague discomfort or carry a subconscious worry, but they haven't articulated the problem in a way that translates into action. In my field, most people didn't fully appreciate how complex certain end-of-life tasks could become — until they found themselves in the middle of a crisis they never prepared for. Simply presenting a solution and hoping people will connect the dots doesn't work when the underlying problem is hidden or poorly understood. Education became my most potent tool. ... Building momentum in a market with no clear precedent means learning to paddle in still waters. I needed to constantly fine-tune the product based on authentic customer feedback, invest the time and effort to educate potential users so they could recognize the value of what I was offering, and craft a holistic experience that viewed their challenges from multiple angles. These three strategies became the bedrock of my approach to Blue Ocean markets. 


Secure AI? Dream on, says AI red team

The first step in an AI red teaming operation is to determine which vulnerabilities to target, they said. They suggest: “starting from potential downstream impacts, rather than attack strategies, makes it more likely that an operation will produce useful findings tied to real world risks. After these impacts have been identified, red teams can work backwards and outline the various paths that an adversary could take to achieve them.” ... The two, the authors said, are distinct yet “both useful and can even be complementary. In particular, benchmarks make it easy to compare the performance of multiple models on a common dataset. AI red teaming requires much more human effort but can discover novel categories of harm and probe for contextualized risks.” ... The bottom line here: responsible AI (RAI) harms are more ambiguous than security vulnerabilities, and that has to do with “fundamental differences between AI systems and traditional software.” Most AI safety research, the authors noted, focuses on adversarial users who deliberately break guardrails, when in truth, they maintained, benign users who accidentally generate harmful content are just as important, if not more so.


New AI Architectures Could Revolutionize Large Language Models

For context, the transformer architecture (the technology that gave ChatGPT the 'T' in its name) is designed for sequence-to-sequence tasks such as language modeling, translation, and image processing. Transformers rely on “attention mechanisms,” which weight how important each token is relative to its surrounding context, to model dependencies between input tokens. This lets them process data in parallel rather than sequentially, as the recurrent neural networks that dominated AI before transformers had to. This capability gave models contextual understanding and marked a before-and-after moment in AI development. ... Google Research's Titans architecture takes a different approach to improving AI adaptability. Instead of modifying how models process information, Titans focuses on changing how they store and access it. The architecture introduces a neural long-term memory module that learns to memorize at test time, similar to how human memory works. ... Overall, the era of AI companies bragging about the sheer size of their models may soon be a relic of the past. If this new generation of neural networks gains traction, future models won't need to rely on massive scale to achieve greater versatility and performance.
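The attention mechanism described above can be reduced to a few lines of NumPy. This is a minimal sketch of scaled dot-product attention for toy data, not any production model's implementation: each token's query scores every key, and a softmax turns those scores into weights over the value vectors.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Each query token scores every key token; softmax turns the
    # scores into weights saying how much each token attends to the others.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # context-mixed values

# Three toy tokens with 4-dimensional embeddings (self-attention: Q=K=V)
rng = np.random.default_rng(0)
Q = K = V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one context-aware vector per token
```

Because every row of the score matrix is computed independently, the whole sequence is processed in one matrix multiplication; this is the parallelism that distinguishes transformers from recurrent networks, which must step through tokens one at a time.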


How to Leverage Network Segmentation for Hospitality Sector PCI SSF Compliance

Network segmentation is the process of dividing a computer network into isolated segments or subnetworks, each protected by security controls, such as firewalls and access restrictions, that restrict traffic flow between segments. This isolation helps contain a security breach, preventing it from spreading across the entire network. ... In the context of PCI SSF compliance, network segmentation helps hospitality businesses protect sensitive payment card data by limiting access to it. By isolating the Cardholder Data Environment (CDE) from the rest of the network, organizations can reduce the scope of PCI SSF compliance while also improving their overall security posture. ... By isolating sensitive data, network segmentation reduces the risk of unauthorized access and data breaches. It creates multiple layers of defense, making it more difficult for attackers to reach critical systems, and it limits the lateral movement of threats, ensuring that a single compromised system does not jeopardize the entire network.
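The CDE-isolation idea can be illustrated with Python's standard `ipaddress` module. The subnets and segment names below are hypothetical assumptions for a hotel network; the point is the deny-by-default rule: only explicitly allow-listed segments may initiate traffic into the CDE.

```python
import ipaddress

# Hypothetical segments for a hotel network; cardholder data lives
# only in the CDE subnet, so its allow-list is kept minimal.
SEGMENTS = {
    "cde":        ipaddress.ip_network("10.10.1.0/24"),  # payment systems
    "corporate":  ipaddress.ip_network("10.10.2.0/24"),  # back office
    "guest_wifi": ipaddress.ip_network("10.10.3.0/24"),  # untrusted guests
}

# Segments allowed to initiate traffic into the CDE (deny by default).
CDE_ALLOWED_SOURCES = {"corporate"}

def segment_of(host: str) -> str:
    """Return the segment name a host address belongs to."""
    addr = ipaddress.ip_address(host)
    for name, net in SEGMENTS.items():
        if addr in net:
            return name
    return "unknown"

def may_reach_cde(src_host: str) -> bool:
    return segment_of(src_host) in CDE_ALLOWED_SOURCES

print(may_reach_cde("10.10.2.15"))  # True: corporate is allow-listed
print(may_reach_cde("10.10.3.7"))   # False: guest Wi-Fi is blocked
```

In practice this policy would be enforced by firewalls or VLAN ACLs rather than application code, but the sketch shows why segmentation shrinks PCI scope: only the allow-listed segments can ever touch the CDE, so only they need the same level of scrutiny.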


Overcoming Key Challenges in an AI-Centric Future

Much has been made of AI and its potential dangers in the hands of attackers. It's true: with the help of AI, launching an attack has never been easier, and it is likely just a matter of time until we witness a significant AI-driven breach. That said, all is not lost. AI-specific security controls are already beginning to emerge, and more advanced solutions will follow as AI becomes more commonplace. ... Regulations almost always lag behind innovation, and AI is no exception. While a handful of AI regulations have begun to emerge around the world, most organizations are currently taking matters into their own hands by implementing dedicated AI policies to evaluate and control the AI services they use. Right now, those initiatives focus primarily on maintaining data privacy and preventing AI from making critical errors. These AI safety standards will continue to evolve and will likely be integrated into existing security frameworks, including those put out by independent advisory bodies. Regulators will almost certainly maintain a strong focus on ethical considerations, creating guidelines that help define acceptable and responsible use cases for AI capabilities.



Quote for the day:

“Winners are not afraid of losing. But losers are. Failure is part of the process of success. People who avoid failure also avoid success.” -- Robert T. Kiyosaki