
Daily Tech Digest - May 24, 2025


Quote for the day:

“In my experience, there is only one motivation, and that is desire. No reasons or principle contain it or stand against it.” -- Jane Smiley



DanaBot botnet disrupted, QakBot leader indicted

Operation Endgame relies on help from a number of private sector cybersecurity companies (Sekoia, Zscaler, CrowdStrike, Proofpoint, Fox-IT, ESET, and others), non-profits such as Shadowserver and white-hat groups like Cryptolaemus. “The takedown of DanaBot represents a significant blow not just to an eCrime operation but to a cyber capability that has appeared to align Russian government interests. The case (…) highlights why we must view certain Russian eCrime groups through a political lens — as extensions of state power rather than mere criminal enterprises,” CrowdStrike commented on the DanaBot disruption. ... “We’ve previously seen disruptions have significant impacts on the threat landscape. For example, after last year’s Operation Endgame disruption, the initial access malware associated with the disruption as well as actors who used the malware largely disappeared from the email threat landscape,” Selena Larson, Staff Threat Researcher at Proofpoint, told Help Net Security. “Cybercriminal disruptions and law enforcement actions not only impair malware functionality and use but also impose cost to threat actors by forcing them to change their tactics, cause mistrust in the criminal ecosystem, and potentially make criminals think about finding a different career.”


AI in Cybersecurity: Protecting Against Evolving Digital Threats

Beyond detecting threats, AI excels at automating repetitive security tasks. Tasks like patching vulnerabilities, filtering malicious traffic, and conducting compliance checks can be time-consuming. AI’s speed and precision in handling these tasks free up cybersecurity professionals to focus on complex problem-solving. ... The integration of AI into cybersecurity raises ethical questions that must be addressed. Privacy concerns are at the forefront, as AI systems often rely on extensive data collection. This creates potential risks for mishandling or misuse of sensitive information. Additionally, AI’s capabilities for surveillance can lead to overreach. Governments and corporations may deploy AI tools for monitoring activities under the guise of security, potentially infringing on individual rights. There is also the risk of malicious actors repurposing legitimate AI tools for nefarious purposes. Clear guidelines and robust governance are crucial to ensuring responsible AI deployment in cybersecurity. ... The growing role of AI in cybersecurity necessitates strong regulatory frameworks. Governments and organizations are working to establish policies that address AI’s ethical and operational challenges in this field. Transparency in AI decision-making processes and standardized best practices are among the key priorities.


Open MPIC project defends against BGP attacks on certificate validation

MPIC is a method to enhance the security of certificate issuance by validating domain ownership and CA checks from multiple network vantage points. It helps prevent BGP hijacking by ensuring that validation checks return consistent results from different geographical locations. The goal is to make it more difficult for threat actors to compromise certificate issuance by redirecting internet routes. ... Open MPIC operates through a parallel validation architecture that maximizes efficiency while maintaining security. When a domain validation check is initiated, the framework simultaneously queries all configured perspectives and collects their results. “If you have 10 perspectives, then it basically asks all 10 perspectives at the same time, and then it will collect the results and determine the quorum and give you a thumbs up or thumbs down,” Sharkov said. This approach introduces some unavoidable latency, but the implementation minimizes performance impact through parallelization. Sharkov noted that the latency is still just a fraction of a second. ... The open source nature of the project addresses a significant challenge for the industry. While large certificate authorities often have the resources to build their own solutions, many smaller CAs would struggle with the technical and infrastructure requirements of multi-perspective validation.
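To make the quorum idea concrete, here is a minimal sketch of multi-perspective validation. It assumes a hypothetical check_from_perspective() helper that performs the domain-control check from one vantage point; the perspective names, quorum threshold, and function names are illustrative and are not taken from the Open MPIC codebase.

```python
# Hedged sketch of multi-perspective quorum validation (not the Open MPIC API).
from concurrent.futures import ThreadPoolExecutor

PERSPECTIVES = ["us-east", "us-west", "eu-central", "ap-southeast", "sa-east"]
QUORUM = 4  # minimum number of agreeing perspectives required for a "thumbs up"

def check_from_perspective(perspective: str, domain: str, challenge: str) -> bool:
    """Stand-in for the real check: a production implementation would perform the
    DNS or HTTP domain-control lookup from this vantage point."""
    return True  # simulated result so the sketch runs end to end

def validate_domain(domain: str, challenge: str) -> bool:
    # Query all perspectives in parallel, then count how many succeeded.
    with ThreadPoolExecutor(max_workers=len(PERSPECTIVES)) as pool:
        results = list(pool.map(
            lambda p: check_from_perspective(p, domain, challenge), PERSPECTIVES))
    return sum(results) >= QUORUM

if __name__ == "__main__":
    print(validate_domain("example.com", "token-123"))  # True when the quorum agrees
```

Because the perspectives are queried in parallel rather than sequentially, the added latency stays close to the slowest single vantage point, which is consistent with Sharkov's point that the overhead remains a fraction of a second.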


How to Close the Gap Between Potential and Reality in Tech Implementation

First, there has to be alignment between the business and tech sides. So, I’ve seen in many institutions that there’s not complete alignment between both. And where they could be starting, they sometimes separate and they go in opposite directions. Because at the end of the day, let’s face it, we’re all looking at how it will help ourselves. Secondly, it’s just the planning, ensuring that you check all the boxes and have a strong implementation plan. One recent customer who just joined Backbase: One of the things I loved about what they brought to the kickoff call was what success looked like to them for implementation. So, they had the work stream, whether the core integration, the call center, their data strategy, or their security requirements. Then, they had the leader who was the overall owner and then they had the other owners of each work stream. Then, they defined success criteria with the KPIs associated with those success criteria. ... Many folks forget that they are, most of the time, still running on a legacy platform. So, for me, success is when they decommission that legacy platform and a hundred percent of their members or customers are on Backbase. That’s one of the very important internal KPIs.


How AIOps sharpens cybersecurity posture in the age of cyber threats

The good news is, AIOps platforms are built to scale with complexity, adapting to new environments, users, and risks as they develop. And organizations can feel reassured that their digital vulnerabilities are safeguarded for the long term. For example, modern methods of attack, such as hyperjacking, can be identified and mitigated with AIOps. This form of attack in cloud security is where a threat actor gains control of the hypervisor – the software that manages virtual machines on a physical server. It allows them to then take over the virtual machines running on that hypervisor. What makes hyperjacking especially dangerous is that it operates beneath the guest operating systems, effectively evading traditional monitoring tools that rely on visibility within the virtual machines. As a result, systems lacking deep observability are the most vulnerable. This makes the advanced observability capabilities of AIOps essential for detecting and responding to such stealthy threats. Naturally, this evolving scope of digital malice also requires compliance rules to be frequently reviewed. When correctly configured, AIOps can support organizations by interpreting the latest guidelines and swiftly identifying the data deviations that would otherwise incur penalties.


Johnson & Johnson Taps AI to Advance Surgery, Drug Discovery

J&J's Medical Engagement AI redefines care delivery, identifying 75,000 U.S. patients with unmet needs across seven disease areas, including oncology. Its analytics engine processes electronic health records and clinical guidelines to highlight patients missing optimal treatments. A New York oncologist, using J&J's insights, adjusted treatment for 20 patients in 2024, improving the chances of survival. The platform engages over 5,000 providers, empowering medical science liaisons with real-time data. It helps the AI innovation team turn overwhelming data into an advantage. Transparent data practices and a focus on patient outcomes align with J&J's ethical standards, making this a model that bridges tech and care. ... J&J's AI strategy rests on five ethical pillars: fairness, privacy, security, responsibility, and transparency. It aims to deliver AI solutions that benefit all stakeholders equitably. The stakeholders and users understand the methods through which datasets are collected and how external influences, such as biases, may affect them. Bias is mitigated through annual data audits, privacy is upheld with encrypted storage and consent protocols, and AI-driven cybersecurity monitoring adds a further layer of protection. A training program, launched in 2024, equipped 10,000 employees to handle sensitive data.


Surveillance tech outgrows face ID

Many oppose facial recognition technology because it jeopardizes privacy, civil liberties, and personal security. It enables constant surveillance and raises the specter of a dystopian future in which people feel afraid to exercise free speech. Another issue is that one’s face can’t be changed like a password can, so if face-recognition data is stolen or sold on the Dark Web, there’s little anyone can do about the resulting identity theft and other harms. ... You can be identified by your gait (how you walk). And surveillance cameras now use AI-powered video analytics to track behavior, not just faces. They can follow you based on your clothing, the bag you carry, and your movement patterns, stitching together your path across a city or a stadium without ever needing a clear shot of your face. The truth is that face recognition is just the most visible part of a much larger system of surveillance. When public concern about face recognition causes bans or restrictions, governments, companies, and other organizations simply circumvent that concern by deploying other technologies from a large and growing menu of options. Whether we’re IT professionals, law enforcement technologists, security specialists, or privacy advocates, it’s important to incorporate the new identification technologies into our thinking, and face the new reality that face recognition is just one technology among many.


How Ready Is NTN To Go To Scale?

Non-Terrestrial Networks (NTNs) represent a pivotal advancement in global communications, designed to extend connectivity far beyond the limits of ground-based infrastructure. By leveraging spaceborne and airborne assets—such as Low Earth Orbit (LEO), Medium Earth Orbit (MEO), and Geostationary (GEO) satellites, as well as High-Altitude Platform Stations (HAPS) and UAVs—NTNs enable seamless coverage in regions previously considered unreachable. Whether traversing remote deserts, deep oceans, or mountainous terrain, NTNs provide reliable, scalable connectivity where traditional terrestrial networks fall short or are economically unviable. This paradigm shift is not merely about extending signal reach; it’s about enabling entirely new categories of applications and industries to thrive in real time. ... A core feature of NTNs is their use of varied orbital altitudes, each offering distinct performance characteristics. Low Earth Orbit (LEO) satellites (altitudes of 500–2,000 km) are known for their low latency (20–50 ms) and are ideal for real-time services. Medium Earth Orbit (MEO) systems (2,000–35,000 km) strike a balance between coverage and latency and are often used in navigation and communications. Geostationary Orbit (GEO) satellites, positioned at ~35,786 km, provide wide-area coverage from a fixed position relative to Earth’s rotation—particularly useful for broadcast and constant-area monitoring. 


Enterprises are wasting the cloud’s potential

One major key to achieving success with cloud computing is training and educating employees. Although the adoption of cloud technology represents a significant change, numerous companies overlook the importance of equipping their staff with the technical expertise and strategic acumen to capitalize on its potential benefits. IT teams that lack expertise in cloud services may use cloud resources inefficiently or ineffectively. Business leaders who are unfamiliar with cloud tools often struggle to leverage data-driven insights that could drive innovation. Employees relying on cloud-based applications might not fully utilize all their functionality due to insufficient training. These skill gaps lead to dissatisfaction with cloud services, and the company doesn’t benefit from its investments in cloud infrastructure. ... The cloud is a tool for transforming operations rather than just another piece of IT equipment. Companies can refine their approach to the cloud by establishing effective governance structures and providing employees with training on the optimal utilization of cloud technology. Once they engage architects and synchronize cloud efforts with business objectives, most companies will see tangible results: cost savings, system efficiency, and increased innovation.


The battle to AI-enable the web: NLweb and what enterprises need to know

NLWeb enables websites to easily add AI-powered conversational interfaces, effectively turning any website into an AI app where users can query content using natural language. NLWeb isn’t necessarily about competing with other protocols; rather, it builds on top of them. The new protocol uses existing structured data formats like RSS, and each NLWeb instance functions as an MCP server. “The idea behind NLWeb is it is a way for anyone who has a website or an API already to very easily make their website or their API an agentic application,” Microsoft CTO Kevin Scott said during his Build 2025 keynote. “You really can think about it a little bit like HTML for the agentic web.” ... “NLWeb leverages the best practices and standards developed over the past decade on the open web and makes them available to LLMs,” Odewahn told VentureBeat. “Companies have long spent time optimizing this kind of metadata for SEO and other marketing purposes, but now they can take advantage of this wealth of data to make their own internal AI smarter and more capable with NLWeb.” ... “NLWeb provides a great way to open this information to your internal LLMs so that you don’t have to go hunting and pecking to find it,” Odewahn said. “As a publisher, you can add your own metadata using schema.org standard and use NLWeb internally as an MCP server to make it available for internal use.”
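As a rough illustration of the structured data NLWeb builds on, the sketch below emits schema.org-style JSON-LD for a single article. The field values are invented for the example, and the snippet is not part of the NLWeb project itself.

```python
# Minimal sketch of schema.org metadata of the kind NLWeb can leverage; values
# are illustrative only, not drawn from the article or the NLWeb specification.
import json

article_metadata = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example product announcement",
    "datePublished": "2025-05-20",
    "author": {"@type": "Organization", "name": "Example Corp"},
    "about": ["agentic web", "conversational interfaces"],
}

# Embedding this JSON-LD in a page (or exposing the content via RSS) gives an
# NLWeb instance structured content it can serve to LLMs through its MCP interface.
print(json.dumps(article_metadata, indent=2))
```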

Daily Tech Digest - October 03, 2024

Why Staging Is a Bottleneck for Microservice Testing

Multiple teams often wait for their turn to test features in staging. This creates bottlenecks. The pressure on teams to share resources can severely delay releases, as they fight for access to the staging environment. Developers who attempt to spin up the entire stack on their local machines for testing run into similar issues. As distributed systems engineer Cindy Sridharan notes, “I now believe trying to spin up the full stack on developer laptops is fundamentally the wrong mindset to begin with, be it at startups or at bigger companies.” The complexities of microservices make it impractical to replicate entire environments locally, just as it’s difficult to maintain shared staging environments at scale. ... From a release process perspective, the delays caused by a fragile staging environment lead to slower shipping of features and patches. When teams spend more time fixing staging issues than building new features, product development slows down. In fast-moving industries, this can be a major competitive disadvantage. If your release process is painful, you ship less often, and the cost of mistakes in production is higher. 


Misconfiguration Madness: Thwarting Common Vulnerabilities in the Financial Sector

Financial institutions require legions of skilled security personnel in order to overcome the many challenges facing their industry. Developers are an especially important part of that elite cadre of defenders for a variety of reasons. First and foremost, security-aware developers can write secure code for new applications, which can thwart attackers by denying them a foothold in the first place. If there are no vulnerabilities to exploit, an attacker won't be able to operate, at least not very easily. Developers with the right training can also help to support both modern and legacy applications by examining the existing code that makes up some of the primary vectors used to attack financial institutions. That includes cloud misconfigurations, lax API security, and the many legacy bugs found in applications written in COBOL and other aging computer languages. However, the task of nurturing and maintaining security-aware developers in the financial sector won’t happen on its own. It requires precise, immersive training programs that are highly customizable and matched to the specific complex environment that a financial services institution is using.


3 things to get right with data management for gen AI projects

The first is a series of processes — collecting, filtering, and categorizing data — that may take several months for KM or RAG models. Structured data is relatively easy, but the unstructured data, while much more difficult to categorize, is the most valuable. “You need to know what the data is, because it’s only after you define it and put it in a taxonomy that you can do anything with it,” says Shannon. ...  “We started with generic AI usage guidelines, just to make sure we had some guardrails around our experiments,” she says. “We’ve been doing data governance for a long time, but when you start talking about automated data pipelines, it quickly becomes clear you need to rethink the older models of data governance that were built more around structured data.” Compliance is another important area of focus. As a global enterprise thinking about scaling some of their AI projects, Harvard keeps an eye on evolving regulatory environments in different parts of the world. It has an active working group dedicated to following and understanding the EU AI Act, and before their use cases go into production, they run through a process to make sure all compliance obligations are satisfied.


Fundamentals of Data Preparation

Data preparation is intended to improve the quality of the information that ML and other information systems use as the foundation of their analyses and predictions. Higher-quality data leads to greater accuracy in the analyses the systems generate in support of business decision-makers. This is the textbook explanation of the link between data preparation and business outcomes, but in practice, the connection is less linear. ... Careful data preparation adds value to the data itself, as well as to the information systems that rely on the data. It goes beyond checking for accuracy and relevance and removing errors and extraneous elements. The data-prep stage gives organizations the opportunity to supplement the information by adding geolocation, sentiment analysis, topic modeling, and other aspects. Building an effective data preparation pipeline begins long before any data has been collected. As with most projects, the preparation starts at the end: identifying the organization’s goals and objectives, and determining the data and tools required to achieve those goals. ... Appropriate data preparation is the key to the successful development and implementation of AI systems in large part because AI amplifies existing data quality problems. 
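As a toy illustration of the enrichment step described above, the sketch below filters raw rows and attaches a coarse sentiment label. The word lists stand in for a real sentiment model, and the record fields are hypothetical.

```python
# Hedged sketch of a data-prep enrichment step: drop empty rows, then add a
# derived sentiment field. The scoring rule is a toy stand-in for a real model.
from dataclasses import dataclass

POSITIVE = {"great", "fast", "reliable"}
NEGATIVE = {"slow", "broken", "outage"}

@dataclass
class Record:
    text: str
    sentiment: str = "neutral"

def prepare(raw_rows: list[str]) -> list[Record]:
    records = []
    for row in raw_rows:
        text = row.strip()
        if not text:  # remove empty or extraneous rows
            continue
        words = set(text.lower().split())
        score = len(words & POSITIVE) - len(words & NEGATIVE)
        label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
        records.append(Record(text=text, sentiment=label))
    return records

print(prepare(["Service was fast and reliable", "", "Another outage today"]))
```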


How to Rein in Cybersecurity Tool Sprawl

Security tool sprawl happens for many different reasons. Adding new tools and new vendors as new problems arise without evaluating the tools already in place is often how sprawl starts. The sheer glut of tools available in the market can make it easy for security teams to embrace the latest and greatest solutions. “[CISOs] look for the newest, the latest and the greatest. They're the first adopter type,” says Reiter. A lack of communication between departments and teams in an enterprise can also contribute. “There's the challenge of teams not necessarily knowing their day-to-day functions of other team,” says Mar-Tang. Security leaders can start to wrap their heads around the problem of sprawl by running an audit of the security tools in place. Which teams use which tools? How often are the tools used? How many vendors supply those tools? What are the lengths of the vendor contracts? Breaking down communication barriers within an enterprise will be a necessary part of answering questions like these. “Talk to the … security and IT risk side of your house, the people who clean up the mess. You have an advocate and a partner to be able to find out where you have holes and where you have sprawl,” Kris Bondi, CEO and co-founder at endpoint security company Mimoto, recommends.


The Promise and Perils of Generative AI in Software Testing

The journey from human automation tester to AI test automation engineer is transformative. Traditionally, transitioning to test automation required significant time and resources, including learning to code and understanding automation frameworks. AI removes these barriers and accelerates development cycles, dramatically reducing time-to-market and improving accuracy, all while decreasing the level of admin tasks for software testers. AI-powered tools can interpret test scenarios written in plain language, automatically generate the necessary code for test automation, and execute tests across various platforms and languages. This dramatically reduces the enablement time, allowing QA professionals to focus on strategic tasks instead of coding complexities. ... As GenAI becomes increasingly integrated into software development life cycles, understanding its capabilities and limitations is paramount. By effectively managing these dynamics, development teams can leverage GenAI’s potential to enhance their testing practices while ensuring the integrity of their software products.
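For a sense of what that looks like in practice, here is a hedged sketch of the kind of executable test an AI tool might generate from a plain-language scenario such as "a valid user can log in". The login() function is a hypothetical stand-in for the system under test, not any specific product's output.

```python
# Illustrative sketch only: tests an AI assistant might produce from a
# plain-language scenario; the system under test here is a trivial stub.
import pytest

def login(username: str, password: str) -> bool:
    """Hypothetical system under test."""
    return username == "alice" and password == "correct-horse"

def test_valid_user_can_log_in():
    assert login("alice", "correct-horse") is True

def test_wrong_password_is_rejected():
    assert login("alice", "guess") is False

if __name__ == "__main__":
    raise SystemExit(pytest.main([__file__, "-q"]))
```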


Near-'perfctl' Fileless Malware Targets Millions of Linux Servers

The malware looks for vulnerabilities and misconfigurations to exploit in order to gain initial access. To date, Aqua Nautilus reports, the malware has likely targeted millions of Linux servers, and compromised thousands. Any Linux server connected to the Internet is in its sights, so any server that hasn't already encountered perfctl is at risk. ... By tracking its infections, researchers identified three Web servers belonging to the threat actor: two that were previously compromised in prior attacks, and a third likely set up and owned by the threat actor. One of the compromised servers was used as the primary base for malware deployment. ... To further hide its presence and malicious activities from security software and researcher scrutiny, it deploys a few Linux utilities repurposed into user-level rootkits, as well as one kernel-level rootkit. The kernel rootkit is especially powerful, hooking into various system functions to modify their functionality, effectively manipulating network traffic, undermining Pluggable Authentication Modules (PAM), establishing persistence even after primary payloads are detected and removed, or stealthily exfiltrating data. 


Three hard truths hindering cloud-native detection and response

Most SOC teams either lack the proper tooling or have so many cloud security point tools that the management burden is untenable. Cloud attacks happen way too fast for SOC teams to flip from one dashboard to another to determine if an application anomaly has implications at the infrastructure level. Given the interconnectedness of cloud environments and the accelerated pace at which cloud attacks unfold, if SOC teams can’t see everything in one place, they’ll never be able to connect the dots in time to respond. More importantly, because everything in the cloud happens at warp speed, we humans need to act faster, which can be nerve wracking and increase the chance of accidentally breaking something. While the latter is a legitimate concern, if we want to stay ahead of our adversaries, we need to get comfortable with the accelerated pace of the cloud. While there are no quick fixes to these problems, the situation is far from hopeless. Cloud security teams are getting smarter and more experienced, and cloud security toolsets are maturing in lockstep with cloud adoption. And I, like many in the security community, am optimistic that AI can help deal with some of these challenges.


How to Fight ‘Technostress’ at Work

Digital stressors don’t occur in isolation, according to the researchers, which necessitates a multifaceted approach. “To address the problem, you can’t just address the overload and invasion,” Thatcher said. “You have to be more strategic.” “Let’s say I’m a manager, and I implement a policy that says no email on weekends because everybody’s stressed out,” Thatcher said. “But everyone stays stressed out. That’s because I may have gotten rid of techno-invasion—that feeling that work is intruding on my life—but on Monday, when I open my email, I still feel really overloaded because there are 400 emails.” It’s crucial for managers to assess the various digital stressors affecting their employees and then target them as a combination, according to the researchers. That means to address the above problem, Thatcher said, “you can’t just address invasion. You can’t just address overload. You have to address them together,” he said. ... Another tool for managers is empowering employees, according to the study. “As a manager, it may feel really dangerous to say, ‘You can structure when and where and how you do work.’ 


Fix for BGP routing insecurity ‘plagued by software vulnerabilities’ of its own, researchers find

Under BGP, there is no way to authenticate routing changes. The arrival of RPKI just over a decade ago was intended to fix that, using a digital record called a Route Origin Authorization (ROA) that identifies an ISP as having authority over specific IP infrastructure. Route origin validation (ROV) is the process a router undergoes to check that an advertised route is authorized by the correct ROA certificate. In principle, this makes it impossible for a rogue router to maliciously claim a route it does not have any right to. RPKI is the public key infrastructure that glues this all together, security-wise. The catch is that, for this system to work, RPKI needs a lot more ISPs to adopt it, something which until recently has happened only very slowly. ... “Since all popular RPKI software implementations are open source and accept code contributions by the community, the threat of intentional backdoors is substantial in the context of RPKI,” they explained. A software supply chain that creates such vital software enabling internet routing should be subject to a greater degree of testing and validation, they argue.
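The core ROV decision can be sketched as follows. This is a simplified illustration: a plain dictionary stands in for the signed ROA data a real RPKI validator would fetch and cryptographically verify, and the prefixes and ASNs are documentation examples.

```python
# Simplified route origin validation (ROV) sketch, not a real RPKI validator.
import ipaddress

# Hypothetical validated ROA cache: prefix -> (authorized origin ASN, max length)
ROAS = {
    ipaddress.ip_network("203.0.113.0/24"): (64500, 24),
}

def rov_state(prefix: str, origin_asn: int) -> str:
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, (asn, max_len) in ROAS.items():
        if net.subnet_of(roa_prefix):
            covered = True
            if asn == origin_asn and net.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "not-found"

print(rov_state("203.0.113.0/24", 64500))   # valid: origin matches the ROA
print(rov_state("203.0.113.0/24", 64666))   # invalid: covered, but wrong origin
print(rov_state("198.51.100.0/24", 64500))  # not-found: no covering ROA
```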



Quote for the day:

"You may have to fight a battle more than once to win it." -- Margaret Thatcher

Daily Tech Digest - September 04, 2024

What is HTTP/3? The next-generation web protocol

HTTPS will still be used as a mechanism for establishing secure connections, but traffic will be encrypted at the HTTP/3 level. Another way to say it is that TLS will be integrated into the network protocol instead of working alongside it. So, encryption will be moved into the transport layer and out of the app layer. This means more security by default—even the headers in HTTP/3 are encrypted—but there is a corresponding cost in CPU load. Overall, the idea is that communication will be faster due to improvements in how encryption is negotiated, and it will be simpler because it will be built-in at a lower level, avoiding the problems that arise from a diversity of implementations. ... In TCP, that continuity isn’t possible because the protocol only understands the IP address and port number. If either of those changes—as when you walk from one network to another while holding a mobile device—an entirely new connection must be established. This reconnection leads to a predictable performance degradation. The QUIC protocol introduces connection IDs or CIDs. For security, these are actually CID sets negotiated by the server and client. 
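The difference connection IDs make can be sketched conceptually. The snippet below is not a QUIC implementation, only an illustration of looking up a session by a negotiated CID rather than by the (IP, port) tuple that TCP is bound to; the addresses and CID values are invented.

```python
# Conceptual contrast: TCP-style lookup keyed on (IP, port) vs QUIC-style lookup
# keyed on a connection ID, which is what lets a session survive an address change.
tcp_sessions = {("198.51.100.7", 51324): "session-A"}   # breaks if the tuple changes
quic_sessions = {"cid-7f3a": "session-A"}               # keyed on a negotiated CID

def tcp_lookup(ip: str, port: int):
    return tcp_sessions.get((ip, port))   # new Wi-Fi/cellular address -> miss, reconnect

def quic_lookup(cid: str):
    return quic_sessions.get(cid)         # same CID from a new address -> same session

print(tcp_lookup("203.0.113.9", 40001))   # None: address changed, connection lost
print(quic_lookup("cid-7f3a"))            # 'session-A': the connection migrates
```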


6 things hackers know that they don’t want security pros to know that they know

It’s not a coincidence that many attacks happen at the most challenging of times. Hackers really do increase their attacks on weekends and holidays when security teams are lean. And they’re more likely to strike right before lunchtime and end-of-day, when workers are rushing and consequently less attentive to red flags indicating a phishing attack or fraudulent activity. “Hackers typically deploy their attacks during those times because they’re less likely to be noticed,” says Melissa DeOrio, global threat intelligence lead at S-RM, a global intelligence and cybersecurity consultancy. ... Threat actors actively engage in open-source intelligence (OSINT) gathering, looking for information they can use to devise attacks, Carruthers says. It’s not surprising that hackers look for news about transformative events such as big layoffs, mergers and the like, she says. But CISOs, their teams and other executives may be surprised to learn that hackers also look for news about seemingly innocuous events such as technology implementations, new partnerships, hiring sprees, and executive schedules that could reveal when they’re out of the office.


Take the ‘Shift Left’ Approach a Step Further by ‘Starting Left’

This makes it vital to guarantee code quality and security from the start so that nothing slips through the cracks. Shift left accounts for this. It minimizes risks of bugs and vulnerabilities by introducing code testing and analysis earlier in the SDLC, catching problems before they mount and become trickier to solve or even find. Advancing testing activities earlier puts DevOps teams in a position to deliver superior-quality software to customers with greater frequency. As a practice, “shift left” requires a lot more vigilance in today’s security landscape. But most development teams don’t have the mental (or physical) bandwidth to do it properly — even though it should be an intrinsic part of code development strategy. In fact, the Linux Foundation revealed in a study recently that almost one-third of developers aren’t familiar with secure software development practices. “Shifting left” — performing analysis and code reviews earlier in the development process — is a popular mindset for creating better software. What the mindset should be, though, is to “start left,” not just impose the burden later on in the SDLC for developers. ... This mindset of “start left” focuses not only on an approach that values testing early and often, but also on using the best tools to do so.


ONCD Unveils BGP Security Road Map Amid Rising Threats

The guidance comes amid an intensified threat landscape for BGP, which serves as the backbone of global internet traffic routing. BGP is a foundational yet vulnerable protocol, developed at a time when many of today's cybersecurity risks did not exist. Coker said the ONCD is committed to covering at least 60% of the federal government's IP space by registration service agreements "by the end of this calendar year." His office recently led an effort to develop a federal RSA template that federal agencies can use to facilitate their adoption of Resource Public Key Infrastructure, which can be used to mitigate BGP vulnerabilities. ... The ONCD report underscores how BGP "does not provide adequate security and resilience features" and lacks critical security capabilities, including the ability to validate the authority of remote networks to originate route announcements and to ensure the authenticity and integrity of routing information. The guidance tasks network operators with developing and periodically updating cybersecurity risk management plans that explicitly address internet routing security and resilience. It also instructs operators to identify all information systems and services internal to the organization that require internet access and assess the criticality of maintaining those routes for each address.


Efficient DevSecOps Workflows With a Little Help From AI

When it comes to software development, AI offers lots of possibilities to enhance workflows at every stage—from splitting teams into specialized roles such as development, operations, and security to facilitating typical steps like planning, managing, coding, testing, documentation, and review. AI-powered code suggestions and generation capabilities can automate tasks like autocompletion and identification of missing dependencies, making coding more efficient. Additionally, AI can provide code explanations, summarizing algorithms, suggesting performance improvements, and refactoring long code into object-oriented patterns or different languages. ... Instead of manually sifting through job logs, AI can analyze them and provide actionable insights, even suggesting fixes. By refining prompts and engaging in conversations with the AI, developers can quickly diagnose and resolve issues, even receiving tips for optimization. Security is crucial, so sensitive data like passwords and credentials must be filtered before analysis. A well-crafted prompt can instruct the AI to explain the root cause in a way any software engineer can understand, accelerating troubleshooting. This approach can significantly improve developer efficiency.
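A minimal sketch of that filtering step is shown below, assuming a CI job log is being prepared as an AI prompt. The redaction patterns and names are illustrative, not a specific product's feature.

```python
# Hedged sketch: redact credentials from a job log before it is sent to an AI
# assistant for root-cause analysis. Patterns here are examples, not exhaustive.
import re

SECRET_PATTERNS = [
    re.compile(r"(?i)(password|token|api[_-]?key)\s*[=:]\s*\S+"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS-style access key IDs
]

def sanitize(log_text: str) -> str:
    for pattern in SECRET_PATTERNS:
        log_text = pattern.sub("[REDACTED]", log_text)
    return log_text

job_log = "npm ERR! 401 Unauthorized\ntoken=ghp_exampleexampleexample\n"
prompt = ("Explain the root cause of this CI failure and suggest a fix:\n"
          + sanitize(job_log))
print(prompt)  # the prompt handed to the AI contains no raw credentials
```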


PricewaterhouseCoopers’ new CAIO – workers need to know their role with AI

“AI is becoming a natural part of everything we make and do. We’re moving past the AI exploration cycle, where managing AI is no longer just about tech, it is about helping companies solve big, important and meaningful problems that also drive a lot of economic value. “But the only way we can get there is by bringing AI into an organization’s business strategy, capability systems, products and services, ways of working and through your people. AI is more than just a tool — it can be viewed as a member of the team, embedding into the end-to-end value chain. The more AI becomes naturally embedded and intrinsic to an organization, the more it will help both the workforce and business be more productive and deliver better value. “In addition, we will see new products and services that are fully AI-powered come into the market — and those are going to be key drivers of revenue and growth.” ... You need to consider the bigger picture, understanding how AI is becoming integrated in all aspects of your organization. That means having your RAI leader working closely with your company’s CAIO (or equivalent) to understand changes in your operating model, business processes, products and services.


What Is Active Metadata and Why Does It Matter?

Active metadata’s ability to update automatically whenever the data it describes changes now extends beyond the data profile itself to enhance the management of data access, classification, and quality. Passive metadata’s static nature limits its use to data discovery, but the dynamic nature of active metadata delivers real-time insights into the data’s lineage to help automate data governance:
- Get a 360-degree view of data: Active metadata’s ability to auto-update ensures that metadata delivers complete and up-to-date descriptions of the data’s lineage, context, and quality. Companies can tell at a glance whether the data is being used effectively, appropriately, and in compliance with applicable regulations.
- Monitor data quality in real time: Automatic metadata updates improve data quality management by providing up-to-the-minute metrics on data completeness, accuracy, and consistency. This allows organizations to identify and respond to potential data problems before they affect the business.
- Patch potential governance holes: Active metadata allows data governance rules to be enforced automatically to safeguard access to the data, ensure it’s appropriately classified, and confirm it meets all data retention requirements.


How to Get IT and Security Teams to Work Together Effectively

Successful collaboration requires a sense of shared mission, Preuss says. Transparency is crucial. "Leverage technology and automation to effectively share information and challenges across both teams," she advises. Building and practicing trust and communication in an environment that's outside the norm is also essential. One way to do so is by conducting joint business resilience drills. "Whether a cyber war game or an environmental crisis [exercise], resilience drills are one way to test the collaboration between teams before an event occurs." ... When it comes to cross-team collaboration, Scott says it's important for members to understand their communication style as well as the communication styles of the people they work with. "At Immuta, we do this through a DiSC assessment, which each employee is invited to complete upon joining the company." To build an overall sense of cooperation and teamwork, Jeff Orr, director of research, digital technology at technology research and advisory firm ISG, suggests launching an exercise simulation in which both teams are required to collaborate in order to succeed. 


Protecting national interests: Balancing cybersecurity and operational realities

A significant challenge we face today is safeguarding the information space against misinformation, disinformation, manipulation and deceptive content. Whether this is at the behest of nation-states, or their supporters, it can be immensely destabilising and disruptive. We must find a way to tackle this challenge, but this should not just focus on the responsibilities held by social media platforms, but also on how we can detect targeted misinformation, counter those narratives and block the sources. Technology companies have a key role in taking down content that is obviously malicious, but we need the processes to respond in hours, rather than days and weeks. More generally, infrastructure used to launch attacks can be spun up more quickly than ever and attacks manifest at speed. This requires the government to work more closely with major technology and telecommunication providers so we can block and counter these threats – and that demands information sharing mechanisms and legal frameworks which enable this. Investigating and countering modern transnational cybercrime demands very different approaches, and of course AI will undoubtedly play a big part in this, but sadly both in attack and defence.


How leading CIOs cultivate business-centric IT

With digital strategy and technology as the brains behind most business functions and operating models, IT organizations are determined to inject more business-centricity into their employee DNA. IT leaders have been burnishing their business acumen and embracing a non-technical remit for some time. Now, there’s a growing desire to infuse that mentality throughout the greater IT organization, stretching beyond basic business-IT alignment to creating a collaborative force hyper-fixated on channeling innovation to advance enterprise business goals. “IT is no longer the group in the rear with the gear,” says Sabina Ewing, senior vice president of business and technology services and CIO at Abbott Laboratories. ... While those with robust experience and expertise in highly technical areas such as cloud architecture or cybersecurity are still highly coveted, IT organizations like Duke Health, ServiceNow, and others are also seeking a very different type of persona. Zoetis, a leading animal health care company, casts a wider net when seeking tech and digital talent, focusing on those who are collaborative, passionate about making a difference, and adaptable to change. Candidates should also have a strong understanding of technology application, says CIO Keith Sarbaugh.



Quote for the day:

''When someone tells me no, it doesn't mean I can't do it, it simply means I can't do it with them.'' -- Karen E. Quinones Miller