Daily Tech Digest - November 26, 2024

Just what the heck does an ‘AI PC’ do?

As the PC market moves to AI PCs, x86 processor dominance will lessen over time, especially in the consumer AI laptop market, as Arm-based AI devices grab more share from Windows x86 AI and non-AI laptops, according to Atwal. “However, in 2025, Windows x86-based AI laptops will lead the business segment,” Atwal said. ... “We see AI-enabled PCs evolving to provide more personalized, adaptive experiences that are tailored to each user’s needs,” Butler said. “The rise of generative AI was a pivotal moment, yet reliance on cloud processing raises concerns around data privacy.” Each component of a PC plays a unique role in making AI tasks efficient, but the NPU is key for accelerating AI computations with minimal power consumption, according to Butler. In general, he said, AI PCs assist in or handle routine tasks to be more efficient and intuitive for users without the need to access an external website or service. ... AI PCs can also boost productivity by handling routine tasks such as scheduling and organizing emails, and by enhancing collaboration with real-time translation and transcription features, according to Butler. 


Humanity Protocol: ‘We’re building a full credential ecosystem’

Distinguishing between humans and machines online has become more important than ever. Over the past few years, the digital world has seen a proliferation of AI-fueled deepfake impersonations, bots and Sybil attacks, in which a single entity creates many false identities to gain influence. An increasing number of companies are trying to come up with solutions relying on blockchain technology. One of the more well-known projects is World Network, previously known as Worldcoin, which scans irises to confirm its users are human. But the space is seeing more and more competitors relying on biometrics to prove people are real – including Humanity Protocol. “There are definitely a bunch of companies that are trying to solve the whole Proof of Personhood problem,” the company’s founder Terence Kwok told Biometric Update in an interview earlier this month. “We’re lucky to be one of the few that have started launching, building a user base and joined the market.” The company launched a testnet in October, allowing users and developers to get their first taste of the platform and receive some free cryptocurrency. The project has so far signed up over a million people – moving quickly to catch up with World Network, which currently has 15 million users, including 7 million verified through its Orb iris-scanning technology.


The way we measure progress in AI is terrible

Benchmark creators often don’t make the questions and answers in their data set publicly available either. If they did, companies could just train their model on the benchmark; it would be like letting a student see the questions and answers on a test before taking it. But that secrecy also makes the benchmarks themselves hard to evaluate. Another issue is that benchmarks are frequently “saturated,” which means all the problems have pretty much been solved. For example, let’s say there’s a test with simple math problems on it. The first generation of an AI model gets a 20% on the test, failing. The second generation of the model gets 90% and the third generation gets 93%. An outsider may look at these results and determine that AI progress has slowed down, but another interpretation could just be that the benchmark got solved and is no longer that great a measure of progress. It fails to capture the difference in ability between the second and third generations of a model. One of the goals of the research was to define a list of criteria that make a good benchmark. “It’s definitely an important problem to discuss the quality of the benchmarks, what we want from them, what we need from them,” says Ivanova. “The issue is that there isn’t one good standard to define benchmarks. This paper is an attempt to provide a set of evaluation criteria. That’s very useful.”
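The arithmetic behind the saturation example can be made concrete. As a rough sketch (the scores are the article's hypothetical numbers; the function is ours), once a benchmark nears its ceiling, a small absolute gain can still represent a large cut in the remaining errors, which headline scores hide:

```python
# Toy illustration of benchmark saturation, using the hypothetical
# scores from the example above (20% -> 90% -> 93%).

def error_reduction(prev_score: float, new_score: float) -> float:
    """Fraction of the remaining errors eliminated between two generations."""
    prev_errors = 1.0 - prev_score
    new_errors = 1.0 - new_score
    return (prev_errors - new_errors) / prev_errors

# Gen 1 -> Gen 2: score jumps 70 points, fixing ~87.5% of errors.
gen1_to_gen2 = error_reduction(0.20, 0.90)

# Gen 2 -> Gen 3: score moves only 3 points, yet still fixes ~30%
# of the errors that were left -- progress the raw score understates.
gen2_to_gen3 = error_reduction(0.90, 0.93)
```

The point of the sketch is only that a saturated benchmark compresses differences near the ceiling; it says nothing about whether the underlying capability gains are comparable.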


Governance Considerations and Pitfalls When Implementing GenAI

Many large organizations are still in the process of establishing robust information governance frameworks for their current environments. Now, they must also address questions about their readiness to manage the impact of Copilot and similar generative AI tools. These questions include whether they can uphold appropriate access, use, and management across their IT infrastructure. Additionally, organizations should assess whether new artifacts are being created that could introduce unforeseen regulatory risk. ... With Copilot, anything a user has permission to access may surface as part of a response to a query or prompt. Without Copilot, over-permissioned users with access to documents they should not have would typically only uncover a document by actively searching for it. Therefore, excess permissions and failure to limit access to certain materials can potentially expose information to far more employees than intended. To manage this, organizations must be diligent in defining controls and thoroughly understand the range of materials that Copilot users can access at different permission levels. Notably, when Copilot is turned on for a user, every application within Microsoft 365 that has a Copilot element will have AI activated.


Next-Gen Networking: Exploring the Utility of Smart Routers in Data Centers

In cases where smart routers offer automated network management capabilities, they usually do so based on software that provides features like the ability to reroute packets to help balance network load or discover new devices automatically when they join the network. In this sense, smart routers don’t really do anything all that new; the sorts of capabilities just mentioned have long been a standard part of network management software. The only differentiator for smart routers, perhaps, is that these devices come bundled with software that enables them to help manage networks automatically, instead of requiring additional network management tools for that purpose. In addition, there seems to be a focus in smart router land on the notion of hands-off network management. Instead of requiring admins to configure networking policies and apply them manually, smart routers promise in many cases to manage your networks for you. It's essentially an example of what you might categorize as NoOps. It’s worth noting, too, that in more than a few cases, smart router vendors are slapping the “AI” label on their devices. But like many vendors who profess to be selling AI-powered solutions today, they're using the term loosely to refer to any type of software that uses data analytics in some sort of way.


Digitising India with AI-based photogrammetry software

Photogrammetry is the practice of capturing measurements from photographs shot by drones, satellites, or aerial platforms and generating maps and 3D models, up to and including a Geographic Information System (GIS). Traditionally, photogrammetric processing involved collecting a huge amount of data through manual effort, with post-processing handled by experts over a considerable period. The introduction of AI and machine learning into photogrammetry has streamlined these processes, making them faster and more automation-friendly. Now, with AI photogrammetry software, one can process thousands of aerial images automatically to produce accurate topographic maps and real-time 3D models. ... Errors in land surveys can be very expensive and lead to many complications, especially in construction, farming, and city management. Using AI-based photogrammetry increases accuracy in measurement and reduces human errors in the process. AI algorithms improve the quality of the resultant maps and models by identifying and rectifying any anomalies in the data automatically. The system can also blend images from different sources, such as aerial pictures, LiDAR data, and satellite images, to provide a better and more accurate picture of the land.


Will AI Kill Google? Past Predictions of Doom Were Totally Wrong

Sam Altman, the top executive overseeing ChatGPT, has said that AI has a good shot at shoving aside Google search. Bill Gates predicted that emerging AI will do tasks like researching your ideal running shoes and automatically placing an order so you'll "never go to a search site again." ... AI definitely could draw us away from Google in ways that smartphones and social media didn't. When you're planning a garden, an AI helper might guide you through where you want the flowers and fruit trees and hire help for you. No Googling necessary. "People are increasingly turning to ChatGPT to find information from the web, including the latest news," Altman's company, OpenAI, said. Maybe it's right to extrapolate from how people are starting to use AI today. Or maybe that's the mistake that Jobs made when he said no one was searching on iPhones. It wasn't wrong in 2010, but it was within a few years. Or what if AI upends how billions of us find information and we still keep on Googling? "The notion that we can predict how these new technologies are going to evolve is silly," said David B. Yoffie, a Harvard Business School professor who has spent decades studying the technology industry. 


Practical strategies to build an inclusive culture in cybersecurity

Despite meaningful progress, the cybersecurity and IT industries continue to face significant challenges in creating truly inclusive environments. Unconscious bias remains a pervasive issue, often influencing hiring, evaluation, and promotion processes, which can disadvantage women and other underrepresented groups. Retention is another ongoing challenge, as many organizations struggle to cultivate workplace cultures that are welcoming and supportive enough to retain diverse talent long-term. Barriers to entry and advancement persist, highlighting the need for continuous improvement and active intervention. While the industry has made strides in recognizing the importance of diversity, achieving full representation and inclusivity requires sustained commitment and effort. The current focus on diversity is encouraging, but only through consistent attention and action will the industry overcome these longstanding challenges and ensure a more equitable future. ... Work-life balance is another significant issue, particularly in cultures where traditional gender roles are still prevalent. Women often face greater expectations regarding balancing work and family, which can impact their career trajectory, especially in environments that lack flexible work arrangements. 


5 ways to achieve AI transformation that works for your business

"Never work in a silo and prepare to be wrong in terms of how you've set the technology up." Kollnig and her colleagues have implemented the Freshworks Customer Service Suite, an omnichannel support software with AI-powered chatbots and ticketing. She told ZDNET that working closely with the technology partner has helped her team to deliver a successful AI transformation. "So, for one of our AI projects, we established our basic set-up and said, 'Freshworks, come in and audit it. Tell us, are we doing this right? Would you do it differently?'" she said. ... Moyes said professionals in all sectors should take some sensible steps, including working with people who know more about AI. "Within every organization, there are groups of technology leads who are interested and want to innovate, evolve, and push," he said. "Lean on them. Learn from those at the coal face who want to do AI. There are no guarantees that the technologies you introduce will be the next best thing, but at least you'll be aware of the potential." Moyes said SimpsonHaugh is looking at how AI can reduce time-intensive tasks, such as summarizing text, and help staff find images to create early-stage design proposals.


What Does Enterprise-Wide Cybersecurity Culture Look Like?

Whoever is championing enterprise-wide security needs to secure buy-in from everyone within an organization. At the top, that means getting the C-suite and board to throw their weight behind security. “At the end of the day, if you don't have the CEO on board and the CEO isn't … voicing the same level of prioritization, then it will be something that's viewed as a half step back from … fundamental business priorities,” Cannava warns. Effective communication is a big part of getting that buy-in from leadership. How can security leaders explain to their boards and fellow executives that security is an essential business enabler? “Really [convert] the technology language or cyber language or jargon into how will … that risk potential impact revenue or reputation or our compliance?” says Landen. Tabletop exercises can be a powerful way to not just tell but show executives the value of cybersecurity. Walking through various cybersecurity incident scenarios can demonstrate the vital connection security has to operations and business outcomes. Ping Identity periodically engages multiple members of the C-suite in these exercises. “Not only do you learn what the gap is, you also learn by doing … you're pulled in and engaged as a member of the C-suite, and now you're invested,” he says.



Quote for the day:

"Great leaders do not desire to lead but to serve." -- Myles Munroe

Daily Tech Digest - November 25, 2024

GitHub Copilot can make inline code suggestions in several ways. Give it a good descriptive function name, and it will generate a working function at least some of the time—less often if it doesn’t have much context to draw on, more often if it has a lot of similar code to use from your open files or from its training corpus. ... Test generation is generally easier to automate than initial code generation. GitHub Copilot will often generate a reasonably good suite of unit tests on the first or second try from a vague comment that includes the word “tests,” especially if you have an existing test suite open elsewhere in the editor. It will usually take your hints about additional unit tests, as well, although you might notice a lot of repetitive code that really should be refactored. Refactoring often works better in Copilot Chat. Copilot can also generate integration tests, but you may have to give it hints about the scope, mocks, specific functions to test, and the verification you need. ... GitHub Copilot Code Reviews can review your code in two ways, and provide feedback. One way is to review your highlighted code selection (Visual Studio Code only, open public preview, any programming language), and the other is to more deeply review all your changes. Deep reviews can use custom coding guidelines.


Closed loop optimisation: Opening a world of advantages for marketers

In marketing, closed loop optimisation refers to the collection and analysis of various data across the marketing lifecycle or customer journey to create a continuous cycle of learning and data-led decision-making. By closing the customer journey loop, starting with the first interaction all the way to “post-sale”, brand marketers can evaluate the effectiveness of advertising campaigns and channels, and deploy their resources in initiatives that deliver the best outcomes. ... With advanced analytics solutions, marketing organisations can process structured and unstructured data from internal and external sources to identify emerging trends, customer needs and behaviours, and other metrics that can inform brand strategies. When a health technology company understood with the help of analytics that user-generated content was a key factor in strengthening interactions with customers, it changed the content strategy to include user feedback, and thereby fostered a sense of community, improved credibility, and elevated the brand experience to substantially increase social media engagement within eighteen months. A top U.S. professional basketball team used predictive analytics to uncover new trends and understand the type of content that would resonate best with fans around the world.


The rise of autonomous enterprises: how robotics, AI, and automation are reshaping the workforce of tomorrow

An autonomous enterprise is an organisation that has successfully implemented the best application of automation technologies to function with minimal human intervention in most aspects. From routine administrative tasks to complex decision-making processes, autonomous enterprises leverage AI, ML, and RPA to drive efficiency, accuracy, and agility. Companies across sectors such as manufacturing, healthcare, logistics, and more are looking towards automation to streamline operations, reduce costs, and innovate. ... As human-machine collaboration grows, there is an increasing need for employers and educational institutions to address reskilling and upskilling to prepare the workforce for continuously changing labour markets. This does not mean automation will eliminate human jobs, but it will definitely require more creativity, critical thinking, and emotional intelligence from human employees—the very qualities AI cannot encapsulate. ... As robotics and AI continue to revolutionise the world, the ethical and governance challenges arising from them have to be responded to proactively and thoughtfully. Privacy, bias, and accountability issues have to be strongly addressed so that these technologies are developed and deployed appropriately.


Overcoming legal and organizational challenges in ethical hacking

A professional ethical hacker must have a broad understanding of various IT systems, networking, and protocols – essentially, a deep “under the hood” knowledge. This foundational expertise allows them to navigate different environments effectively. Additionally, target-specific knowledge is crucial, as the security measures and vulnerabilities can vary significantly based on the technology stack in use. ... AI and machine learning can significantly enhance ethical hacking efforts. On the offensive side, automated processes supported by AI can efficiently identify vulnerabilities and suggest areas for further manual security testing. This streamlines the initial phases of penetration testing and helps uncover potential issues more effectively. Additionally, AI can assist in generating detailed penetration testing reports, saving time and ensuring accuracy. On the defensive side, AI and machine learning are invaluable for detecting anomalies and correlating data to identify potential threats. These technologies enable a proactive approach to cybersecurity, enhancing both offensive and defensive strategies. By using AI and machine learning, ethical hackers can improve their effectiveness. 


Why The Gig Economy Is A Key Target For API Attacks

One of the most difficult attacks to prevent is business logic abuse. Strictly speaking, it isn’t an attack at all. Business logic abuse sees the functionality of the API used against it, so that a task it is supposed to execute is then used to carry out an attack. It might be used to subvert access control, for instance, with attackers manipulating URLs, session tokens, cookies, or hidden fields to gain advanced privileges and access sensitive data or functionality. Or bots may attempt to repeatedly sign up, log in, or execute purchases in order to validate credentials, access unauthorised data, or commit fraud. Perhaps flaws in session tokens or poor handling of session data allow the attacker to hijack sessions and escalate privileges. Or the attacker may try to bypass built-in constraints on business logic by reviewing points of entry such as form fields and coming up with inputs that the developers may not have planned for. ... Legacy app defences rely on embedding JavaScript code into end-user applications and devices, which slows deployment and leaves platforms vulnerable to reverse engineering. Some of this code, such as CAPTCHAs, also introduces customer friction.
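To make the hidden-field point concrete, here is a minimal, hypothetical sketch (not from the article; the SKU, price, and limit are invented) of the standard countermeasure: the server re-derives prices from its own catalog and enforces quantity constraints on every request, rather than trusting anything the client sends in a form field:

```python
# Illustrative defense against one form of business logic abuse: a
# checkout handler that ignores the client-supplied price and enforces
# the quantity constraint server-side. All values here are made up.

CATALOG = {"sku-123": 49.99}   # authoritative server-side prices
MAX_QTY_PER_ORDER = 10         # business constraint, enforced per request

def validate_order(sku: str, qty: int, client_price: float) -> float:
    """Return the order total, computed from server-side data only."""
    if sku not in CATALOG:
        raise ValueError("unknown SKU")
    if not 1 <= qty <= MAX_QTY_PER_ORDER:
        raise ValueError("quantity outside allowed range")
    server_price = CATALOG[sku]
    if client_price != server_price:
        # A mismatch suggests a tampered hidden field; log it, but never
        # honor the client's value either way.
        pass
    return round(server_price * qty, 2)
```

Even if an attacker rewrites the hidden price field to 0.01, the total is computed from the catalog, so the tampering has no effect on the charge.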


From Contractors to OAuth: Emerging SDLC Threats for 2025

Outsourcing software development is common practice but opens the door to significant security risks when not properly managed. These outsourced operations lack the same stringent security measures applied to internal teams, creating blind spots that attackers can easily leverage. A common vulnerability in this scenario is the over-provisioning of access rights. ... Poorly configured CI/CD pipelines are another critical weakness. When organizations outsource software development, they often have little visibility into the security practices of their contractors’ environments. Attackers can exploit poorly configured pipelines to access source code or manipulate software delivery processes. ... Preventing OAuth phishing can be difficult because it exploits user behavior rather than traditional technical vulnerabilities. While phishing training is essential, the best defense is limiting the damage attackers can cause if they gain access. By restricting developer entitlements to only what is necessary for their role, organizations can reduce the impact of a compromised account and prevent broader system breaches. ... The most catastrophic SDLC security breaches in 2025 may not stem from technical vulnerabilities but from poorly managed development teams.


In a Growing Threat Landscape, Companies Must do Three Things to Get Serious About Cybersecurity

From a practical standpoint, execs and the board make budget decisions about every domain, including security. Unlike other domains, cybersecurity isn’t a profit center for most businesses, so it often gets underfunded compared to business units and projects that generate revenue. That’s a problem. If executives understand how much is at stake at a fundamental business level, they will invest in bolstering their cybersecurity posture. Cybersecurity is essential to protecting profit centers and enabling them to safely grow. And more and more, customers are looking at a company’s security bona fides when making their buying decisions. It’s in the execs’ self-interest to take charge in adopting a cybersecurity posture, as they will ultimately be held accountable in the event of catastrophe. ... It’s also essential to have an honest, objective CISO at the helm of cybersecurity who has power at the executive table. The C-suite and board won’t ever know how to effectively prioritize security unless they have a CISO guiding them accordingly. Communication is central here. There has to be open discussion between the CISO and the rest of the C-suite regularly.


Perimeter Security Is at the Forefront of Industry 4.0 Revolution

Perimeter security is crucial for military and government organizations and business enterprises alike to detect potential threats, deter possible intruders, and delay illegal attempts to breach a secured area or perimeter. Additionally, perimeter security maintains operational continuity within these organizations. To prevent unauthorized entry to their premises, high-security associations, commercial centers, government facilities, and other organizations can establish a physical barrier utilizing detection and deterrence techniques. ... The effectiveness of a perimeter security system depends upon several factors, such as the design and implementation of the security measures, proper integration of physical and electronic devices, and the expertise of well-trained personnel. A well-designed perimeter security system should provide comprehensive coverage of a building or premises with multiple layers of security that create effective obstacles against intruders and thieves. Regular maintenance and testing of the perimeter security system are necessary to ensure its continued efficiency. It is critical to continuously assess and expand perimeter security measures in order to counter different types of threats and hazards.


5 Trends Reshaping the Data Landscape

Before companies can successfully leverage AI and advanced analytics, it’s urgent to address the “runaway data movement and data pipeline challenges that are so common in enterprises,” he pointed out. “When you think about data movement and data pipelines, most customers have transactional systems or legacy environments that then feed data to downstream systems. Or they’re getting a firehose of data from a variety of sources that are coming from the cloud, and they can be batch or streaming data.” What happens is these organizations “take that data and transform or consume it by multiple business units using their own extract, transform, and load (ETL) solutions,” he illustrated. “They can be completely different types of data. This is typically the first kind of deviation or loss of a unified source of truth for the data.” The ETL solutions that each group manages “have their own user acceptance testing or production environments, which means more copies of data,” he pointed out. “Then that data is fed to multiple systems, maybe for dashboarding or for more low-latency analytics. But it’s also fed to their systems, like OLAP systems or data lakes.” If a data team “can’t get the data where it needs to go, they’re not going to be able to analyze it in an efficient, secure way,” he said.


Top challenges holding back CISOs’ agendas

With limited resources and an ever-growing list of threats, CISOs are often caught managing multiple projects at once. Some of these might move forward bit by bit, but without clear milestones or measurable progress, it’s difficult to show their real impact. This makes it harder for CISOs to secure extra funding or support, especially when stakeholders can’t see solid, tangible results. “That makes it almost impossible to show meaningful success,” says John Terrill, CSO at Phosphorus. “A lot of times, this can come from trying to boil the ocean.” Many CISOs recommend learning to “speak business” and occasionally scaring the board to get more funding, but these can only go so far. “The company has a finite amount of resources; you need to make peace with that,” Avivi says. ... “Aligning both the workforce and the organization’s leadership around risk appetite helps tremendously to focus your energy and your dollars in the places that most need them,” says Ken Deitz, CISO at Secureworks. “If an organization has a stated risk appetite for security risk, the priorities start to jump off the page.” CISOs should be open about the risk the organization will take if their priorities are not addressed. 



Quote for the day:

"A leadership disposition guides you to take the path of most resistance and turn it into the path of least resistance." -- Dov Seidman

Daily Tech Digest - November 24, 2024

AI agents are unlike any technology ever

“Reasoning” and “acting” (often implemented using the ReAct, or Reasoning and Acting, framework) are key differences between AI chatbots and AI agents. But what’s really different is the “acting” part. If the main agent LLM decides that it needs more information, some kind of calculation, or something else outside the scope of the LLM itself, it can choose to solve its problem using web searches, database queries, calculations, code execution, APIs, and specialized programs. ... Since the dawn of computing, the users who used software were human beings. With agents, for the first time ever, the software is also a user who uses software. Many of the software tools agents use are regular websites and applications designed for people. They’ll look at your screen, use your mouse to point and click, switch between windows and applications, open a browser on your desktop, and surf the web — in fact, all these abilities exist in Anthropic’s “Computer Use” feature. Other tools that the agent can access are designed exclusively for agent use. Because agents can access software tools, they’re more useful, modular, and adaptable. Instead of training an LLM from scratch, or cobbling together some automation process, you can instead provide the tools the agent needs and just let the LLM figure out how to achieve the task at hand.
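The reason-then-act pattern described above can be sketched in a few lines. This is a toy illustration with a stubbed "LLM" decision step and invented tool names; it is not Anthropic's or any other vendor's actual API, and a real agent would loop over many reason/act steps rather than one:

```python
# Toy reason-then-act agent: a decision step picks a tool from a
# registry, the tool runs, and its observation comes back. The tools
# and the decide() heuristic are stand-ins, not real services.

def web_search(query: str) -> str:
    return f"results for {query!r}"  # stand-in for a real search API

def calculator(expr: str) -> str:
    # Toy arithmetic evaluator with builtins disabled; a real agent
    # would use a sandboxed execution tool instead.
    return str(eval(expr, {"__builtins__": {}}))

TOOLS = {"web_search": web_search, "calculator": calculator}

def decide(task: str):
    """Stub for the LLM's reasoning step: choose a tool and its input."""
    if any(ch.isdigit() for ch in task):
        return ("calculator", task)
    return ("web_search", task)

def run_agent(task: str) -> str:
    tool_name, tool_input = decide(task)        # "reasoning"
    observation = TOOLS[tool_name](tool_input)  # "acting"
    return observation  # a real agent would feed this back and iterate
```

The design point the excerpt makes is visible even in the stub: capability lives in the tool registry, so extending the agent means adding a tool, not retraining a model.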


Live On the Edge

Why live on the edge now? Because, despite public cloud usage being ubiquitous, many deployments are ad hoc and poorly implemented. “The focus of refactoring cloud infrastructure should be on optimizing costs by eliminating redundant, overbuilt or unused cloud infrastructure,” says Gartner. ... Can edge computing also benefit the environment? Yes, according to a study by IBM Corp. “One direct way is by using edge computing to monitor protected species of wildlife inhabiting remote places,” IBM says. “Edge computing can help wildlife officials and park rangers identify and stop poaching activities, sometimes before these offenses even occur.” Another relates to energy management. “Edge computing supports the use of smart grids, which can deliver energy more efficiently and help businesses leave a smaller carbon footprint,” IBM notes. “Grid or distributed computing is where a group of machines and networks work together for a common computing purpose. Resources are utilized in an optimized manner, thus reducing the amount of waste that can occur when large quantities of power are consumed.” More significantly, edge computing can also support the remote monitoring of oil and gas assets. 


Getting started with AI agents (part 1): Capturing processes, roles and connections

An organizational chart might be a good place to start, but I would suggest starting with workflows, as the same people within an organization tend to act through different processes and with different people depending on the workflow. There are available tools that use AI to help identify workflows, or you can build your own gen AI model. I’ve built one as a GPT which takes the description of a domain or a company name and produces an agent network definition. Because I’m utilizing a multi-agent framework built in-house at my company, the GPT produces the network as a Hocon file, but it should be clear from the generated files what the roles and responsibilities of each agent are and what other agents it is connected to. Note that we want to make sure that the agent network is a directed acyclic graph (DAG). This means that no agent can simultaneously be down-chain and up-chain of any other agent, whether directly or indirectly. This greatly reduces the chances that queries in the agent network fall into a tailspin. In the examples outlined here, all agents are LLM-based. If a node in the multi-agent organization should have zero autonomy, then that agent, paired with its human counterpart, should run everything by the human.
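The DAG constraint can be checked mechanically before a network is run. Below is a small illustrative sketch (the agent names and the adjacency-dict representation are our assumptions, not the author's Hocon format) that rejects any network in which an agent is, directly or indirectly, both up-chain and down-chain of another:

```python
# Verify that an agent network, given as {agent: [down-chain agents]},
# is a directed acyclic graph. A "gray" node found again during the
# same depth-first descent means a back edge, i.e. a cycle.

def is_dag(edges: dict) -> bool:
    color = {}  # unvisited (absent), "gray" = in progress, "black" = done

    def visit(node) -> bool:
        color[node] = "gray"
        for child in edges.get(node, []):
            if color.get(child) == "gray":
                return False  # child is also an ancestor: cycle
            if color.get(child) is None and not visit(child):
                return False
        color[node] = "black"
        return True

    return all(visit(n) for n in edges if color.get(n) is None)
```

For example, a front-desk agent delegating to sales and support agents that both consult a shared CRM agent is a valid DAG, while two agents that each treat the other as down-chain would be rejected.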


Preparing Project Managers for an AI-Driven Future

Right now, about 95% of AI conversations are around tools that help people do their jobs better, like ChatGPT or other large language models. For most project managers, AI can be a huge timesaver. Think of it as a tool that takes on repetitive tasks—like summarizing meeting notes or helping with scheduling—so you can focus on higher-value work. ... AI can free you up to focus on the strategic parts of your job. It’s not here to replace project managers; it’s here to make them more efficient. At this moment, a lot of people are using AI from a personal or group productivity perspective. But they are increasingly going to depend on AI as part of their team. You’re already managing more AI than you might think. And in the future, you’ll be managing a lot more. Some things will be done by people and some things will be done by machines, and we need to make sure the whole thing is happening in a totally planned way. ... First thing to understand is that AI projects are data projects. If you’re used to traditional software projects, where functionality is front and center, AI is different. AI relies on data quality—“garbage in, garbage out,” as they say. Your primary focus needs to be on getting the right data in and managing the outputs, which are data as well.


Making quantum computing accessible through decentralization

A decentralized model for quantum computing sidesteps many of these challenges. Rather than relying on centralized hardware-intensive setups, it distributes computational tasks across a global network of nodes. This approach taps into existing resources—standard GPUs, laptops, and servers—without needing the extreme cooling or complex facilities required by traditional quantum hardware. Instead, this decentralized network forms a collective computational resource capable of solving real-world problems at scale using quantum techniques. This decentralized Quantum-as-a-Service approach emulates the behaviors of quantum systems without strict hardware demands. By decentralizing the computational load, these networks achieve a comparable level of efficiency and speed to traditional quantum systems—without the same logistical and financial constraints. ... Decentralized quantum computing represents a transformative shift in how we approach advanced problem-solving. By leveraging accessible infrastructure and distributing tasks across a global network, powerful computing is brought within reach of many who were previously excluded. 


Data Security vs. Cyber Security – Why the Difference Matters

Cybersecurity is the practice of safeguarding digital systems, networks, and programs from attacks that aim to steal, alter, or destroy sensitive data, extort money through ransomware, or disrupt business operations. Despite a substantial $183 billion investment in traditional security measures in 2023 and projections indicating a 14% increase in these security budgets for 2024, data breaches surged by 78%, reaching a record high. ... Data is the most valuable commodity of a company, yet we don’t see resource allocation and time investment in data security reflecting this importance. Data security involves protecting the data itself. Once protected, the data can travel anywhere and remain protected. Having the fine granularity to safeguard the data allows you to grant users the minimum access necessary for their job functions. When someone does need to use the data, they must be authorized to do so. ... Zero trust data protection techniques significantly enhance data security posture and business value. The first step to improving security and data value is identifying the most at-risk yet least accessed data. It’s essential to assess the need for clear-text visibility of high-risk data across people, processes, and systems and to consider the business impact of minimizing this risk, including factors like regulatory compliance, reputation, and insurance.
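One hedged illustration of “protecting the data itself” is field-level tokenization, so a record stays protected wherever it travels. The key handling and field names below are assumptions for the sketch; note that HMAC-based tokens are one-way pseudonyms, not reversible encryption:

```python
# Illustrative only: deterministic tokenization of sensitive fields, so a
# record stays protected wherever it travels. The key and field names are
# hypothetical; HMAC tokens are one-way pseudonyms, not reversible encryption.
import hashlib
import hmac

SECRET_KEY = b"example-key-rotate-in-practice"   # assumption: managed per environment

def tokenize(value: str) -> str:
    # HMAC keeps tokens stable (so joins still work) but unforgeable without the key.
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def protect(record: dict, sensitive: set) -> dict:
    return {k: (tokenize(v) if k in sensitive else v) for k, v in record.items()}

row = {"customer": "Ada Lovelace", "ssn": "078-05-1120", "region": "EU"}
safe = protect(row, sensitive={"ssn"})

assert safe["ssn"] != row["ssn"] and len(safe["ssn"]) == 16
assert safe["region"] == "EU"                    # non-sensitive fields pass through
```

Because the protection rides with the record, the tokenized copy can be shared with analytics or third parties while only authorized systems holding the key can map tokens back to identities.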


Is Your Phone Spying On You? How to Check and What to Do

For years, people have noticed advertisements for products they recently discussed in conversation — even without searching for them online — suddenly appear on their devices. While many dismissed this as a coincidence or attributed it to targeted advertising based on online searches, it turns out there’s more to the story. According to a report by 404 Media, a marketing firm has confirmed that smartphones are not just tracking users’ online activity — they are also listening to what users say out loud, near their phones. Smartphones might indeed be listening to our conversations, thanks to a technology known as “active listening.” This unsettling discovery comes after a marketing firm, whose clients include tech giants like Google and Facebook, admitted to using software that monitors users’ conversations through the microphones of their devices. The admission has raised serious questions about privacy, user consent, and the ethics of targeted advertising. ... For better or for worse, there is generally nothing illegal about using audio information to target advertising. While it is obviously illegal to spy on someone without their consent, most phone users have given their permission for this practice without knowing it, according to legal experts.


CNCF Brings Jaeger and OpenTelemetry Closer Together to Improve Observability

In the wake of adding support for OpenTelemetry, the project is now working on revamping the user interface for Jaeger to make that data more easily discoverable, in addition to normalizing dependency views. The project is also moving toward adding support for the Storage v2 interface to consume OpenTelemetry data natively, along with support for ClickHouse as the official storage backend for tracing data. Finally, the project intends to add support for Helm charts and an Operator that will make deploying Jaeger on Kubernetes clusters simpler. ... The challenge, of course, has been first finding the funding for observability initiatives, followed by the issues that arise as DevOps teams move to consolidate tooling. Many software engineers naturally become attached to a particular monitoring tool. Convincing them to swap it out for another platform requires effort and, most importantly, training. Each organization will individually decide to what degree it wants to drive tool consolidation; in many cases, however, the cost of acquiring an observability platform assumes savings will be generated by eliminating the need for other tools.


Zero Days Top Cybersecurity Agencies' Most-Exploited List

The prevalence of zero-day vulnerabilities on this year's list is a reminder that attackers regularly seek ways of exploiting widely used types of software and hardware before vendors identify the underlying flaw and fix it. The joint security advisory also details guidance prepared by CISA and the National Institute of Standards and Technology designed to improve organizations' cyber resilience to better combat all types of cybersecurity threats. Specific recommendations include regularly using automated asset discovery to find all of the hardware, software, systems and services inside an IT organization's estate and locking them down as much as possible; prepping and testing incident response plans; and keeping regular, secure backup copies stored off-network to facilitate rapid repair and restoration of systems. The guidance also recommends implementing zero trust network architecture, using phishing-resistant multifactor authentication as an identity and access management control, enforcing least-privileged access, and reducing the number of third-party applications and unique types of builds used.


Achieving Optimal Outcomes in Security Through Platformization

Platformization unifies multiple solutions and services into a single architecture with a shared data store and streamlined management. With native integrations, each component becomes more powerful than standalone products. This approach helps increase productivity, simplify operations, and extract the most value from data, all leading to better security outcomes and greater efficiency. ... Using the platform approach should never entail giving up security efficacy for the sake of vendor consolidation or simplified management. If there is a corresponding set of point products in a given area, the minimum bar by which the “platform” component must be measured is the very best of those individual tools. Flexibility and scalability are important. A platform needs to empower your company to gradually grow into using it. A total “rip and replace” of multiple security tools at once is far more complex than most enterprises are willing to attempt. It’s even harder when you factor in the differing replacement cycles of existing solutions. You need the option to adopt the platform piece by piece or all at once – whichever suits your organization best – while retaining the ability to cover all your security bases.



Quote for the day:

“Opportunities don’t happen, you create them.” -- Chris Grosser

Daily Tech Digest - November 23, 2024

AI Regulation Readiness: A Guide for Businesses

The first thing to note about AI compliance today is that few laws and other regulations are currently on the books that impact the way businesses use AI. Most regulations designed specifically for AI remain in draft form. That said, there are a host of other regulations — like the General Data Protection Regulation (GDPR), the California Privacy Rights Act (CPRA), and the Personal Information Protection and Electronic Documents Act (PIPEDA) — that have important implications for AI. These compliance laws were written before the emergence of modern generative AI technology placed AI onto the radar screens of businesses (and regulators) everywhere, and they mention AI sparingly if at all. But these laws do impose strict requirements related to data privacy and security. Since AI and data go hand-in-hand, you can't deploy AI in a compliant way without ensuring that you manage and secure data as current regulations require. This is why businesses shouldn't think of AI as an anything-goes space due to the lack of regulations focused on AI specifically. Effectively, AI regulations already exist in the form of data privacy rules. 


Cloud vs. On-Prem AI Accelerators: Choosing the Best Fit for Your AI Workloads

Like most types of hardware, AI accelerators can run either on-prem or in the cloud. An on-prem accelerator is one that you install in servers you manage yourself. This requires you to purchase the accelerator and a server capable of hosting it, set them up, and manage them on an ongoing basis. A cloud-based accelerator is one that a cloud vendor makes available to customers over the internet using an IaaS model. Typically, to access a cloud-based accelerator, you'd choose a cloud server instance designed for AI. For example, Amazon offers EC2 cloud server instances that feature its Trainium AI accelerator chip. Google Cloud offers Tensor Processing Units (TPUs), another type of AI accelerator, as one of its cloud server options. ... Some types of AI accelerators are only available through the cloud. For instance, you can't purchase the AI chips developed by Amazon and Google for use in your own servers. You have to use cloud services to access them. ... Like most cloud-based solutions, cloud AI hardware is very scalable. You can easily add more AI server instances if you need more processing power. This isn't the case with on-prem AI hardware, which is costly and complicated to scale up.


Platform Engineering Is The New DevOps

Platform engineering has provided a useful escape hatch at just the right time. Its popularity has grown strongly, with a well-attended inaugural platform engineering day at KubeCon Paris in early 2024 confirming attendee interest. A platform engineering day was part of the KubeCon NA schedule this past week and will also be included at next year’s KubeCon in London. “I haven't seen platform engineering pushed top down from a C-suite. I've seen a lot of guerilla stuff with platform and ops teams just basically going out and doing a skunkworks thing and sneaking it into production and then making a value case and growing from there,” said Keith Babo, VP of product and marketing at Solo.io. ... “If anyone ever asks me what’s my definition of platform engineering, I tend to think of it as DevOps at scale. It’s how DevOps scales,” says Kennedy. The focus has shifted away from building cloud native technology, done by developers, to using cloud native technology, which is largely the realm of operations. That platform engineering should start to take over from DevOps in this ecosystem may not be surprising, but it does highlight important structural shifts.


Artificial Intelligence and Its Ascendancy in Global Power Dynamics

According to the OECD, AI is defined as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions that influence real or virtual environments.” The vision for Responsible AI is clear: establish global auditing standards, ensure transparency, and protect privacy through secure data governance. Yet, achieving Responsible AI requires more than compliance checklists; it demands proactive governance. For example, the EU’s AI Act takes a hardline approach to regulating high-risk applications like real-time biometric surveillance and automated hiring processes, whereas the U.S., under President Biden’s Executive Order on Safe, Secure, and Trustworthy AI, emphasizes guidelines over strict enforcement. ... AI is becoming the lynchpin of cybersecurity and national security strategies. State-backed actors from China, Iran, and North Korea are weaponizing AI to conduct sophisticated cyber-attacks on critical infrastructure. The deployment of Generative Adversarial Networks (GANs) and WormGPT is automating cyber operations at scale, making traditional defenses increasingly obsolete. In this context, a cohesive, enforceable framework for AI governance is no longer optional but essential. 


Why voice biometrics is a must-have for modern businesses

Voice biometrics are making waves across multiple industries. Here’s a look at how different sectors can leverage this technology for a competitive edge. Financial services: Banks and financial institutions are actively integrating voice verification into call centers, allowing customers to authenticate themselves with their voice and eliminating the need for secret words or PIN codes. This strengthens security, reduces time and cost per customer call and enhances the customer experience. Automotive: With the rise of connected vehicles, voice is already heavily used via integrated digital assistants that provide hands-free access to in-car services like navigation, settings and communications. Adding voice recognition allows such in-car services to be personalized for the driver and opens the possibility of further enhancements such as commerce. Automotive brands can integrate voice recognition to offer seamless access to new services like parking, fueling, charging and curbside pick-up by utilizing in-car payments that boost security, convenience and customer satisfaction. Healthcare: Healthcare providers can use voice authentication to securely verify patient identities over the phone or via telemedicine. This ensures that sensitive information remains protected, while providing a seamless experience for patients who may need hands-free options.


When and Where to Rate-Limit: Strategies for Hybrid and Legacy Architectures

While rate-limiting is an essential tool for protecting your system from traffic overloads, applying it directly at the application layer — whether for microservices or legacy applications — is often a suboptimal strategy. ... Legacy systems operate differently. They often rely on vertical scaling and have limited flexibility to handle increased loads. While it might seem logical to apply rate-limiting directly to protect fragile legacy systems, this approach usually falls short. The main issue with rate-limiting at the legacy application layer is that it’s reactive. By the time rate-limiting kicks in, the system might already be overloaded. Legacy systems, lacking the scalability and elasticity of microservices, are more prone to total failure under high load, and rate-limiting at the application level can’t stop this once the traffic surge has already reached its peak. ... Rate-limiting should be handled further upstream rather than deep in the application layer, where it either conflicts with scalability (in microservices) or arrives too late to prevent failures. This leads us to the API gateway, the strategic point in the architecture where traffic control is most effective. 
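The gateway-level limit the article argues for is commonly a token bucket. Here is a minimal sketch of the mechanism; the rate and capacity values are illustrative, and a real gateway (NGINX, Envoy, Kong, etc.) would implement this natively:

```python
# Minimal token-bucket sketch of the kind of limit an API gateway enforces
# upstream of the application. Rate and capacity values are illustrative.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate                  # tokens refilled per second
        self.capacity = capacity          # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                      # a gateway would answer HTTP 429 here

bucket = TokenBucket(rate=5.0, capacity=2.0)      # 5 req/s sustained, bursts of 2
results = [bucket.allow() for _ in range(3)]      # three back-to-back requests
assert results == [True, True, False]
```

Placing this logic at the gateway means a traffic surge is absorbed before it ever reaches a fragile legacy backend, which is exactly why application-layer limiting arrives too late.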


Survey Surprise: Quantum Now in Action at Almost One-Third of Sites

The use cases for quantum — scientific research, complex simulations — have been documented for a number of years. However, with the arrival of artificial intelligence, particularly generative AI, on the scene, quantum technology may start finding more mainstream business use cases. In a separate report out of Sogeti (a division of Capgemini Group), Akhterul Mustafa calls an impending mashup of generative AI and quantum computing the “tech world’s version of a dream team, not just changing the game but also pushing the boundaries of what we thought was possible.” ... The convergence of generative AI and quantum computing brings “some pretty epic perks,” Mustafa states. For example, it enables the supercharging of AI models. “Training AI models is a beastly task that needs tons of computing power. Enter quantum computers, which can zip through complex calculations, potentially making AI smarter and faster.” In addition, “quantum computers can sift through massive datasets in a blink. Pair that with generative AI’s knack for cooking up innovative solutions, and you’ve got a recipe for solving brain-bending problems in areas like health, environment, and beyond.”


How Continuous Threat Exposure Management (CTEM) Helps Your Business

A CTEM framework typically includes five phases: identification, prioritization, mitigation, validation, and reporting and improvement. In the first phase, systems are continuously monitored to identify new or emerging vulnerabilities and potential attack vectors. This continuous monitoring is essential to the vulnerability management lifecycle. Identified vulnerabilities are then assessed based on their potential impact on critical assets and business operations. In the mitigation phase, action is taken to defend against high-risk vulnerabilities by applying patches, reconfiguring systems or adjusting security controls. The validation stage focuses on testing defenses to ensure vulnerabilities are properly mitigated and the security posture remains strong. In the final phase of reporting and improvement, IT leaders gain access to security metrics and improved defense routes, based on lessons learned from incident response. ... While both CTEM and vulnerability management aim to identify and remediate security weaknesses, they differ in scope and execution. Vulnerability management is more about targeted and periodic identification of vulnerabilities within an organization based on a set scan window.
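The prioritization phase described above can be sketched as a simple scoring pass over findings; the field names and weighting below are illustrative assumptions, not part of any CTEM standard:

```python
# Hedged sketch of CTEM prioritization: rank findings by a simple risk score
# (severity x asset criticality, boosted for known exploitation). Fields and
# weights are illustrative, not from any standard.
findings = [
    {"cve": "CVE-A", "severity": 9.8, "asset_criticality": 0.9, "exploited_in_wild": True},
    {"cve": "CVE-B", "severity": 7.5, "asset_criticality": 0.4, "exploited_in_wild": False},
    {"cve": "CVE-C", "severity": 5.3, "asset_criticality": 1.0, "exploited_in_wild": True},
]

def risk(f: dict) -> float:
    score = f["severity"] * f["asset_criticality"]
    return score * 2 if f["exploited_in_wild"] else score   # boost known-exploited bugs

queue = sorted(findings, key=risk, reverse=True)
assert [f["cve"] for f in queue] == ["CVE-A", "CVE-C", "CVE-B"]
```

Note how the medium-severity CVE-C outranks the higher-severity CVE-B once asset criticality and active exploitation are factored in: that re-ordering is the point of the prioritization phase.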


DevOps in the Cloud: Leveraging Cloud Services for Optimal DevOps Practices

A well-designed DevOps transformation strategy can help organizations deliver software products and their services quickly and reliably while improving the overall efficiency of their development and delivery processes. ... Cloud platforms facilitate the immediate provisioning of infrastructure components, including servers, storage units, and databases. This helps teams swiftly initiate new development and testing environments, hastening the software development lifecycle. Companies can see a significant decrease in infrastructure provisioning time by integrating cloud services. ... DevOps helps development and operations teams work together. Cloud platforms provide a central place for storing code, configurations, and important files so everyone can be on the same page. Additionally, cloud-based communication and collaboration tools streamline communication and break down silos between teams. ... Cloud services provide a pay-as-you-go system, so there is no need for a large upfront investment in hardware. This way, companies can scale their infrastructure according to their requirements, saving a lot of money. 


Reinforcement learning algorithm provides an efficient way to train more reliable AI agents

To boost the reliability of reinforcement learning models for complex tasks with variability, MIT researchers have introduced a more efficient algorithm for training them. The findings are published on the arXiv preprint server. The algorithm strategically selects the best tasks for training an AI agent so it can effectively perform all tasks in a collection of related tasks. In the case of traffic signal control, each task could be one intersection in a task space that includes all intersections in the city. By focusing on a smaller number of intersections that contribute the most to the algorithm's overall effectiveness, this method maximizes performance while keeping the training cost low. The researchers found that their technique was between five and 50 times more efficient than standard approaches on an array of simulated tasks. This gain in efficiency helps the algorithm learn a better solution faster, ultimately improving the performance of the AI agent. "We were able to see incredible performance improvements, with a very simple algorithm, by thinking outside the box. An algorithm that is not very complicated stands a better chance of being adopted by the community because it is easier to implement and easier for others to understand."
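The excerpt does not spell out the researchers' actual algorithm, so the following is only a loose illustration of the idea of "focusing on the tasks that contribute most": a greedy selection over a made-up similarity score between intersections. Every name and number here is an assumption for the sketch, not the paper's method:

```python
# Loose illustration only (NOT the MIT algorithm): greedily pick a small
# training subset whose tasks best "cover" the whole task space, using a
# made-up similarity score between intersections.
import itertools

def greedy_select(tasks, similarity, budget):
    """Pick `budget` tasks maximizing summed best-similarity to every task."""
    chosen = []
    while len(chosen) < budget:
        def coverage(candidate):
            return sum(
                max(similarity[(t, c)] for c in chosen + [candidate])
                for t in tasks
            )
        best = max((t for t in tasks if t not in chosen), key=coverage)
        chosen.append(best)
    return chosen

# Four "intersections" in two clusters: {A, B} behave alike, {C, D} behave alike.
tasks = ["A", "B", "C", "D"]
cluster = {"A": 0, "B": 0, "C": 1, "D": 1}
similarity = {
    (s, t): 1.0 if s == t else (0.8 if cluster[s] == cluster[t] else 0.1)
    for s, t in itertools.product(tasks, repeat=2)
}

picked = greedy_select(tasks, similarity, budget=2)
assert len(picked) == 2 and {cluster[t] for t in picked} == {0, 1}
```

With a budget of two, the greedy pass selects one representative from each behavioral cluster, which is the intuition behind training on a few well-chosen intersections instead of all of them.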



Quote for the day:

"Too many of us are not living our dreams because we are living our fears." -- Les Brown

Daily Tech Digest - November 22, 2024

AI agents are coming to work — here’s what businesses need to know

Defining exactly what an agent is can be tricky, however: LLM-based agents are an emerging technology, and there’s a level of variance in the sophistication of tools labelled as “agents,” as well as how related terms are applied by vendors and media. And as with the first wave of generative AI (genAI) tools, there are question marks around how businesses will use the technology. ... With so many tools in development or coming to the market, there’s a certain amount of confusion among businesses that are struggling to keep pace. “The vendors are announcing all of these different agents, and you can imagine what it’s like for the buyers: instead of ‘The Russians are coming, the Russians are coming,’ it’s ‘the agents are coming, the agents are coming,’” said Loomis. “They’re being bombarded by all of these new offerings, all of this new terminology, and all of these promises of productivity.” Software vendors also offer varying interpretations of the term “agent” at this stage, and tools coming to market exhibit a broad spectrum of complexity and autonomy. ... Many of the agent builder tools coming to business and work apps require little or no expertise. This accessibility means a wide range of workers could manage and coordinate their own agents.


The limits of AI-based deepfake detection

In terms of inference-based detection, ground truth is never known or assumed; instead, detection yields a likelihood score, from one to ninety-nine percent, that the content in question has been manipulated. An inference-based approach needs no buy-in from platforms, but it does need robust models trained on a wide variety of deepfaking techniques and technologies across varied use cases and circumstances. To stay ahead of emerging threat vectors and groundbreaking new models, those building an inference-based solution can look to emerging gen AI research and implement such methods into detection models as, or even before, such research becomes productized. ... Greater public awareness and education will always be of immense importance, especially in places where content is consumed that could potentially be deepfaked or artificially manipulated. Yet deepfakes are getting so convincing, so realistic, that even storied researchers now have a hard time differentiating real from fake simply by looking at or listening to a media file. This is how advanced deepfakes have become, and they will only continue to grow in believability and realism. That is why it is crucial to implement deepfake detection solutions in the aforementioned content platforms or anywhere deepfakes can and do exist.


Quantum error correction research yields unexpected quantum gravity insights

So far, scientists have not found a general way of differentiating trivial and non-trivial AQEC codes. However, this blurry boundary motivated Liu, Daniel Gottesman of the University of Maryland, US; Jinmin Yi of Canada’s Perimeter Institute for Theoretical Physics; and Weicheng Ye at the University of British Columbia, Canada, to develop a framework for doing so. To this end, the team established a crucial parameter called subsystem variance. This parameter describes the fluctuation of subsystems of states within the code space, and, as the team discovered, links the effectiveness of AQEC codes to a property known as quantum circuit complexity. ... The researchers also discovered that their new AQEC theory carries implications beyond quantum computing. Notably, they found that the dividing line between trivial and non-trivial AQEC codes also arises as a universal “threshold” in other physical scenarios – suggesting that this boundary is not arbitrary but rooted in elementary laws of nature. One such scenario is the study of topological order in condensed matter physics. Topologically ordered systems are described by entanglement conditions and their associated code properties. 


Towards greener data centers: A map for tech leaders

The transformation towards sustainability can be complex, involving key decisions about data center infrastructure. Staying on-premises offers control over infrastructure and data but poses questions about energy sourcing. Shifting to hybrid or cloud models can leverage the innovations and efficiencies of hyperscalers, particularly regarding power management and green energy procurement. One of the most significant architectural advancements in this context is hyperconverged infrastructure (HCI). Traditionally, data centers operate using a three-tier architecture comprising separate servers, storage, and network equipment. This model, though reliable, has clear limitations in terms of energy consumption and cooling efficiency. By merging the server and storage layers, HCI reduces both the power demands and the associated cooling requirements. ... The drive to create more efficient and environmentally conscious data centers is not just about cost control; it’s also about meeting the expectations of regulators, customers, and stakeholders. As AI and other compute-intensive technologies continue to proliferate, organizations must reassess their infrastructure strategies, not just to meet sustainability goals but to remain competitive.


What is a data architect? Skills, salaries, and how to become a data framework master

The data architect and data engineer roles are closely related. In some ways, the data architect is an advanced data engineer. Data architects and data engineers work together to visualize and build the enterprise data management framework. The data architect is responsible for visualizing the blueprint of the complete framework that data engineers then build. ... Data architect is an evolving role and there’s no industry-standard certification or training program for data architects. Typically, data architects learn on the job as data engineers, data scientists, or solutions architects, and work their way to data architect with years of experience in data design, data management, and data storage work. ... Data architects must have the ability to design comprehensive data models that reflect complex business scenarios. They must be proficient in conceptual, logical, and physical model creation. This is the core skill of the data architect and the most requested skill in data architect job descriptions. This often includes SQL development and database administration. ... With regulations continuing to evolve, data architects must ensure their organization’s data management practices meet stringent legal and ethical standards. They need skills to create frameworks that maintain data quality, security, and privacy.


AI – Implementing the Right Technology for the Right Use Case

Right now, we very much see AI in this “peak of inflated expectations” phase and predict that it will dip into the “trough of disillusionment”, where organizations realize that it is not the silver bullet they thought it would be. In fact, there are already signs of cynicism as decision-makers are bombarded with marketing messages from vendors and struggle to discern what is a genuine use case and what is not relevant for their organization. This is a theme that also emerged as cybersecurity automation matured – the need to identify the right use case for the technology, rather than try to apply it across the board. ... That said, AI is and will continue to be a useful tool. In today’s economic climate, as businesses adapt to a new normal of continuous change, AI—alongside automation—can be a scale function for cybersecurity teams, enabling them to pivot and scale to defend against evermore diverse attacks. In fact, our recent survey of 750 cybersecurity professionals found that 58% of organizations are already using AI in cybersecurity to some extent. However, we do anticipate that AI in cybersecurity will pass through the same adoption cycle and challenges experienced by “the cloud” and automation, including trust and technical deployment issues, before it becomes truly productive.


A GRC framework for securing generative AI

Understanding the three broad categories of AI applications is just the beginning. To effectively manage risk and governance, further classification is essential. By evaluating key characteristics such as the provider, hosting location, data flow, model type, and specificity, enterprises can build a more nuanced approach to securing AI interactions. A crucial factor in this deeper classification is the provider of the AI model. ... As AI technology advances, it brings both transformative opportunities and unprecedented risks. For enterprises, the challenge is no longer whether to adopt AI, but how to govern AI responsibly, balancing innovation against security, privacy, and regulatory compliance. By systematically categorizing generative AI applications—evaluating the provider, hosting environment, data flow, and industry specificity—organizations can build a tailored governance framework that strengthens their defenses against AI-related vulnerabilities. This structured approach enables enterprises to anticipate risks, enforce robust access controls, protect sensitive data, and maintain regulatory compliance across global jurisdictions. The future of enterprise AI is about more than just deploying the latest models; it’s about embedding AI governance deeply into the fabric of the organization.
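The classification step described above might be sketched as follows; the attributes, tier names, and rules are illustrative assumptions for the sake of the example, not drawn from any regulation or framework:

```python
# Hedged sketch of the classification step: bucket a generative AI app by
# provider, hosting, and data flow into a coarse risk tier. All names and
# thresholds are illustrative, not from any standard.
from dataclasses import dataclass

@dataclass
class AIApp:
    name: str
    provider: str            # e.g. "openai", "in-house"
    hosting: str             # "saas", "private-cloud", "on-prem"
    sends_customer_data: bool

def risk_tier(app: AIApp) -> str:
    if app.sends_customer_data and app.hosting == "saas":
        return "high"        # customer data leaves the enterprise boundary
    if app.sends_customer_data:
        return "medium"
    return "low"

apps = [
    AIApp("support-chatbot", "openai", "saas", sends_customer_data=True),
    AIApp("code-assistant", "in-house", "on-prem", sends_customer_data=False),
]
assert [risk_tier(a) for a in apps] == ["high", "low"]
```

Even a coarse bucketing like this gives the governance team a machine-readable inventory to which access controls, review cadences, and compliance obligations can then be attached per tier.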


Business Continuity Depends on the Intersection of Security and Resilience

The focus of security, or the goal of security, or the intended purpose of security in its most natural and traditional form, right before we start to apply it to other things, is to prevent bad things from happening, or protect the organization or protect assets. It doesn't necessarily have to be technology that does it. This is where your policies and procedures come into place. Letting users know what acceptable use policies are or what things are accepted when leveraging corporate resources. From a technology perspective, it's your firewalls, antivirus, intrusion detection systems and things of that nature. So, this is where we focus on good cyber hygiene. We're controlling the controllables and making sure that we're taking care of the things that are within our control. What about resilience? This one is near and dear to my heart. That's because I've been in tech and security for almost 25 years, and I've kind of gone through this evolution of what I think is important. We're trained as practitioners in this industry to believe that the goal is to reduce risk. We must reduce or mitigate cyber risk, or we can make other risk decisions. We can avoid it, we can accept it, or we can transfer it. But practically speaking, when we show up to work every day and we're doing something active, we're reducing risk.


How to stop data mesh turning into a data mess

Realistically, expecting employees to remember to follow data quality and compliance guidelines is neither fair nor enforceable. Adherence must be implemented without frustrating users, and become an integral part of the project delivery process. Unlikely as this sounds, a computational governance platform can impose the necessary standards as ‘guardrails’ while also accelerating the time to market of products. Sitting above an organisation’s existing range of data enablement and management tools, a computational governance platform ensures every project follows pre-determined policies, for quality, compliance, security, and architecture. Highly customisable standards can be set at global or local levels, whatever is required. ... While this might seem restrictive, there are many benefits from having a standardised way of working. To streamline processes, intelligent automated templates help data practitioners quickly initiate new projects and search for relevant data. The platform can oversee the deployment of data products by checking their compliance and taking care of the resource provisioning, freeing the teams from the burden of coping with infrastructure technicalities (on cloud or on-prem) and certifying data product compliance at the same time, before data products enter production. 
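A "guardrail" of this kind can be as simple as validating a data-product descriptor against global policies before deployment; the policy names and descriptor keys below are hypothetical illustrations, not any platform's actual schema:

```python
# Illustrative "guardrail" pass: a data-product descriptor must satisfy
# global policies before it can be deployed. Policy names and descriptor
# keys are hypothetical.
POLICIES = [
    ("owner_assigned", lambda p: bool(p.get("owner"))),
    ("pii_tagged",     lambda p: all("pii" in c for c in p.get("columns", []))),
    ("retention_ok",   lambda p: p.get("retention_days", 0) <= 365),
]

def violations(product: dict) -> list:
    return [name for name, check in POLICIES if not check(product)]

product = {
    "owner": "growth-team",
    "columns": [{"name": "email", "pii": True}, {"name": "clicks", "pii": False}],
    "retention_days": 30,
}
assert violations(product) == []          # cleared to deploy

bad = {"owner": "", "columns": [{"name": "email"}], "retention_days": 900}
assert violations(bad) == ["owner_assigned", "pii_tagged", "retention_ok"]
```

Because the checks run automatically at deployment time, teams get the standardization without having to remember the guidelines, which is the "adherence without frustrating users" the passage calls for.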


The SEC Fines Four SolarWinds Breach Victims

Companies should ensure the cyber and data security information they share within their organizations is consistent with what they share with government agencies, shareholders and the public, according to Buchanan Ingersoll & Rooney’s Sanger. This applies to their security posture prior to a breach, as well as their responses afterward. “Consistent messaging is difficult to manage given that dozens, hundreds or thousands could be responsible for an organization’s cybersecurity. Investigators will always be able to find a dissenting or more pessimistic outlook among the voices involved,” says Sanger. “If there is a credible argument that circumstances are or were worse than what the organization shares publicly, leadership should openly acknowledge it and take steps to justify the official perspective.” Corporate cybersecurity breach reporting is still relatively uncharted territory, however. “Even business leaders who intend to act with complete transparency can make inadvertent mistakes or communicate poorly, particularly because the language used to discuss cybersecurity is still developing and differs between communities,” says Sanger. “It’s noteworthy that the SEC framed each penalized company as having ‘negligently minimized its cybersecurity incident in its public disclosures.’”



Quote for the day:

"Perfection is not attainable, but if we chase perfection we can catch excellence." -- Vince Lombardi