Daily Tech Digest - November 28, 2024

Agentic AI: The Next Frontier for Enterprises

Agentic AI represents a significant leap forward. "These systems can perform complex reasoning, integrate with vast enterprise datasets and execute processes autonomously. For instance, a task like merging customer accounts, which traditionally required ticket creation and days of manual effort, can now be completed in seconds with agentic AI," said Arun Kumar Parameswaran ... Salesforce's Agentforce, unveiled at Dreamforce 2024, represents a significant milestone. Built on the company's Atlas reasoning engine and using models such as OpenAI's GPT-4 and Google's Gemini, Agentforce combines advanced AI with Salesforce's extensive ecosystem of customer engagement data. Agentforce marks the "third wave of AI," said Marc Benioff, CEO of Salesforce. He predicts a massive 1 billion AI agents by 2026. Unlike earlier waves, which focused on predictive analytics and conversational bots, this phase emphasizes intelligent agents capable of autonomous decision-making. Salesforce has amassed years of customer engagement data, workflows and metadata, making Agentforce a precision tool that understands and anticipates customer needs.


Get started with bootable containers and image mode for RHEL

Bootable containers, also provided as image mode for Red Hat Enterprise Linux, represent an innovation in merging containerization technology with full operating system deployment. At their core, bootable containers are OCI (Open Container Initiative) container images that contain a complete Linux system, including the kernel and hardware support. This approach has several characteristics:

Immutability: The entire system is treated as an immutable unit, reducing configuration drift and enhancing security (other than /etc and /var, all directories are mounted read-only once deployed on a physical or virtual machine).

Atomic updates: System updates can be performed as atomic operations, simplifying rollbacks and ensuring system consistency.

Standardized tooling: Leverages existing OCI container tools and workflows, reducing the learning curve for teams familiar with containerization, and enables designing a complete OS environment using a Containerfile as a blueprint.

This is a wonderful benefit for a variety of use cases, including edge computing and IoT devices (where consistent, easily updatable system images are crucial), as well as general cloud-native infrastructure, enabling infrastructure-as-code practices at the OS level.
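To make the blueprint idea concrete, here is a minimal sketch of a Containerfile for an image-mode system; the base image path and package choices are assumptions for illustration and may differ in your environment:

```
# Containerfile: a minimal sketch of an image-mode OS definition
FROM registry.redhat.io/rhel9/rhel-bootc:latest

# Layer packages onto the complete OS image, just as with an app image
RUN dnf -y install httpd && dnf clean all

# Bake configuration into the image; at runtime only /etc and /var
# remain writable
RUN systemctl enable httpd
```

Built with standard OCI tooling such as podman build, the resulting image can be pushed to a registry and rolled out to physical or virtual machines, with updates applied as atomic operations.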


Traditional EDR won't cut it: why you need zero trust endpoint security

The development of EDR tools was the next step in cyber resiliency after antivirus began falling behind in its ability to stop malware. The struggle began when the rate at which new malware was created and distributed far outpaced the rate at which it could be catalogued and blocked from causing harm. The most logical step was to develop a cybersecurity tool that could identify malware by the actions it takes, not just by its code. ... cybercriminals are now using AI to streamline their malware generation process, creating malware at faster speeds and improving its ability to run without detection. Another crucial problem with traditional EDRs and other detection-based tools is that they do not act until the malware is already running in the environment, causing them to fail customers and miss cyberattacks until it is too late. ... With application allowlisting, you create a list of the applications and software you trust and need, and block everything else from running. Allowlisting is a zero trust method of application control that prevents known and unknown threats from running on your devices, stopping cyberattacks such as ransomware from detonating.
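To make the allowlisting idea concrete, here is a minimal sketch of deny-by-default application control keyed on file hashes; real products also evaluate publishers, paths, and certificates, and the hash values below are placeholders:

```python
import hashlib

# Placeholder SHA-256 hashes of explicitly approved executables
ALLOWLIST = {
    "3f5a...d9e1",  # e.g., corporate VPN client
    "a91c...77b2",  # e.g., office suite
}

def file_sha256(path: str) -> str:
    """Hash an executable's contents for comparison against the allowlist."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def may_execute(path: str) -> bool:
    # Deny by default: anything not explicitly trusted is blocked,
    # including never-before-seen malware
    return file_sha256(path) in ALLOWLIST
```

The point of the design is the default: unknown binaries, including AI-generated malware no scanner has ever logged, simply never run.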


AI and the future of finance: How AI is empowering NBFCs

One of the first applications of AI in Non-Banking Financial Companies (NBFCs) is the evaluation of credit risk. Until now, lenders relied mainly on credit scoring models and legacy data on a client. However, such models often fail to grasp the complexity of a person’s or business’s financial profile, a common problem in countries with large informal economies. AI, on the other hand, can analyse large amounts of data, from historical transaction information to phone use and even social behaviour. AI algorithms are able to analyse this data at astonishing speed, recognising trends and yielding more precise forecasts about the borrower’s capability to pay back loans. This enables NBFCs to offer credit to a wider and more diverse client base, which ultimately drives financial inclusion. ... The function of AI extends beyond just providing transactional support. With the help of sophisticated machine-learning models, NBFCs are able to offer personalised financial products tailored to individual customers’ financial behaviour, preferences, lifestyles, and circumstances. ... By using advanced analytics and machine-learning models, NBFCs are able to identify new opportunities to grow.


Achieving Success in the New Era of AI-Driven Data Management

AI-driven personalization is essential for companies looking to stand out in a competitive marketplace. By leveraging vast amounts of customer data, AI helps businesses create highly tailored experiences that adapt to individual user preferences, increasing engagement and loyalty. Recent research shows "that 81 percent of customers prefer companies that offer a personalized experience." ... AI-driven data analytics raises significant ethical, privacy, and regulatory challenges. Ethical considerations, such as bias detection and mitigation, are necessary to ensure AI models provide fair and accurate outcomes. Implementing governance frameworks and transparency in AI decision-making builds trust by making algorithms' logic accessible and accountable, minimizing the risk of unintended discrimination in data-driven insights. Data privacy and security are equally critical. The increased use of techniques like differential privacy raises expectations of high privacy standards. Differential privacy adds carefully calibrated "noise" to data sets — random variations designed to prevent the re-identification of individuals while still allowing accurate aggregate insights.
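As a concrete sketch of that mechanism, the classic Laplace approach adds noise scaled to a query's sensitivity and a privacy budget epsilon; the numbers below are illustrative:

```python
import numpy as np

def private_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query changes by at most 1 when any one individual is
    added or removed, so its sensitivity is 1 and the Laplace noise
    scale is 1 / epsilon.
    """
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Smaller epsilon means more noise: stronger privacy, less accuracy
print(private_count(10_000, epsilon=0.1))  # aggregate stays useful, individuals hidden
```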


Riding the wave of digital transformation: Insights and lessons from Japan’s journey

Availability and accessibility of digital infrastructure is often inadequate in developing countries, preventing digital services from reaching everyone. Japan’s experience in this domain ranges from formulating national strategies for digital infrastructure development to providing affordable high-speed internet access, and to integrating and standardizing different systems. The key takeaway here is the importance of sustaining robust infrastructure investment over time and providing room for digital system scalability and flexibility. ... With this in mind, Japan embraced innovative approaches to enhance people’s digital skills. Some cities, like Kitakyushu, are training staff to use low-code tools—software that allows them to design applications with simple code—as well as providing other training on digital transformation to equip staff at various levels within local governments with relevant skills. ... Digital transformation relies on coordinated efforts: the Japanese central government established supportive policies and frameworks, while local governments translated these into actionable initiatives for public benefit.


When Hackers Meet Tractors: Surprising Roles in IoT Security

IoT encompasses the billions of connected devices we use daily - everything from smart home gadgets to fitness trackers. IIoT focuses on industrial applications, such as manufacturing robots, energy grid systems and autonomous vehicles. While these technologies bring remarkable efficiencies, they also expand the potential attack surface for cybercriminals. Ransomware, data breaches, and system takeovers are no longer just concerns for tech companies - they’re threats to every industry that relies on connectivity. ... Breaking into IoT and IIoT cybersecurity may seem daunting, but the pathway is more accessible than you might think. Leverage transferable skills. Many professionals transition into IoT/IIoT roles by building on their existing cybersecurity expertise. For instance, knowledge of network security or ethical hacking can be adapted to these environments. It is also beneficial to pursue specialized certifications that can demonstrate your expertise and open doors in niche fields. ... GICSP is designed specifically for professionals working in industrial environments, such as manufacturing, energy, or transportation. It bridges the gap between IT, OT (Operational Technology), and IIoT, emphasizing the secure operation of industrial control systems.


How to Ensure Business Continuity for Banks and Financial Services

A business continuity plan is only as effective as the people behind it. Creating a culture of safety and preparedness throughout a financial services organization is key to a successful crisis response. Regular training sessions, disaster simulations, and frequent updates to the BCP keep teams ready and capable of responding efficiently. Facilities teams must have a clear understanding of their roles and responsibilities during a disruption. From decision-makers to on-the-ground personnel, each team member should know exactly what steps to take to restore operations. Clear protocols ensure that recovery efforts can be executed quickly, minimizing service interruptions and maintaining a seamless customer experience. Disasters may be inevitable, but with the right facilities management strategies in place, financial service companies can be well-prepared to respond effectively and ensure business continuity. From conducting risk assessments to leveraging technology and building strong vendor partnerships, proactive facilities management can be the difference between a rapid recovery and prolonged downtime. Now is the time to assess the current state of facilities, ensure teams are trained, and confirm that business continuity plans are robust. 


Enterprises Ill-prepared to Realize AI’s Potential

To build more AI infrastructure readiness, skilled talent will be key to overcoming a deficit in workers needed to maintain IT infrastructure, Patterson suggests. In fact, only 31% of companies believed their talent was in a “high state of readiness” to fully make use of AI. In addition, 24% of those surveyed did not believe their companies held enough talent to address the “growing demand for AI,” the Cisco report revealed. Expanding the AI talent pool will require forming a learning culture for innovation, he says. That includes talent development and creating clear career paths. Leadership feels the pressure to achieve AI readiness, but workers are hesitant to use AI, according to the Cisco AI readiness report. “While organizations face pressure from leadership to bring in AI, the disconnect is likely due to hesitancy among workers within the organization who must take steps to gain new skills for AI or fear AI taking over their jobs,” Patterson says. ... “If you can’t secure AI, you won’t be able to successfully deploy AI,” he says. Meanwhile, tech professionals should develop a holistic view of the infrastructure required to adopt AI while incorporating observability and security, according to Patterson. A holistic view of infrastructure will bring “easier operations, resiliency, and efficiency at scale,” Patterson says.


The Role of Edge-to-Cloud Infrastructure in Shaping Digital Transformation

Unlike the traditional cloud model, which transports data to distant data centers for processing, edge infrastructure brings the distributed computing network closer to users–it is powered by small, local computing power near the end-user and relies on the cloud only as a ‘director’ of operations. This edge-to-cloud computing model allows IoT devices to stay small and affordable. It also allows localized computing power to expedite data processing across many applications without relying on high throughput and consistent connectivity to a hyperscale cloud or other data center hundreds or thousands of miles away. ... The key to edge computing is handling the sizeable amounts of data that IoT devices can produce in conjunction with existing in-building systems that would be difficult, risky, or cost-prohibitive to supplant. Given that IoT devices and existing systems often provide raw and isolated data, IoT platforms consolidate, aggregate, and then analyze data in real time, or farm it out to external tools in the cloud for specific needs (work order management, MOPs, etc.). The key is not just real-time context: because IoT platforms also maintain a database of historical information, truly actionable outcomes can be driven from the data.
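A minimal sketch of that division of labor, with hypothetical thresholds and a placeholder upload step: the edge node reduces a burst of raw sensor readings to a compact summary locally, and only the summary crosses the network to the cloud 'director':

```python
from statistics import mean

ANOMALY_THRESHOLD = 90.0  # hypothetical alert level for a temperature sensor

def process_at_edge(readings: list[float]) -> dict:
    """Condense raw high-frequency readings into a small summary locally."""
    return {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "max": max(readings),
        "anomalies": [r for r in readings if r > ANOMALY_THRESHOLD],
    }

def send_to_cloud(summary: dict) -> None:
    # Placeholder: only the compact summary is uploaded, so the device
    # does not depend on constant high-throughput connectivity
    print("uploading", summary)

send_to_cloud(process_at_edge([71.2, 70.9, 93.4, 72.0]))
```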



Quote for the day:

"Before you are a leader, success is all about growing yourself. When you become a leader, success is all about growing others" -- Jack Welch

Daily Tech Digest - November 27, 2024

Cybersecurity’s oversimplification problem: Seeing AI as a replacement for human agency

One clear solution to the problem of technology oversimplification is to tailor AI training and educational initiatives towards diverse endpoints. Research clearly demonstrates that know-how of the underlying functions of security professions has a real mediating effect on the excesses of encountering disruptive, unfamiliar conditions. The mediation of this effect by the oversimplification mentality, unfortunately, suggests that more is required. Specifically, discussion of the foundational functionality of AI systems needs to be married to as many diverse outcomes as possible to emphasize the dynamism of the technology. ... Naturally, one of the value propositions of studies like the one presented here is the ability for professionals to see the world as another kind of professional might. Whilst tabletop exercises are already a core tool of the cybersecurity profession, there are opportunities to incorporate comparative applications’ learning for AI using simple simulations. ... Finally, wherever possible, role rotation is of clear advantage to overcoming the issues illustrated herein. In testing, the diversity of career roles over and above career length played a similar role in mitigating the excesses of the impact of novel conditions on response priorities.


How to Create an Accurate IT Project Timeline

Building resilient project plans that can handle unforeseen, yet often inevitable changes, is key to ensuring timeline accuracy. "Understanding dependencies, identifying bottlenecks, and planning delivery around these constraints have shown to be important for timeline accuracy," Chandrasekar says. Project accuracy also depends on clear communication and tracking. "It's critical to consistently review timelines with your project team and stakeholders, making updates as new information is discovered," Naqib says. He adds that project timelines should be tracked with the support of a work management tool, such as SmartSheet or Jira, in order to measure progress and identify gaps. Yet even with perfect planning, unanticipated delays or changes may occur. Proper planning and communication are key to assuring timeline accuracy, says Anne Gee, director of delivery excellence for IT managed services at data and technology consulting firm Resultant. ... The best way to get a lagging timeline back on schedule is to work with your project team to identify the root cause, Naqib advises. "Then, you can work with your team and your greater organization to explore possible resolution accelerators that will keep your timeline on track."


Shaping the Future of AI Benchmarking – Trends & Challenges

AI benchmarking serves as a foundational tool for evaluating and advancing artificial intelligence systems. Its primary objectives address critical aspects of AI development, ensuring that models are efficient, effective, and aligned with real-world needs. ... Benchmarks provide valuable insights into a model’s limitations, serving as a roadmap for enhancement. For instance:

Identifying Bottlenecks: If a model struggles with inference speed or accuracy on specific data types, benchmarks highlight these areas for targeted optimization.

Algorithm Development: Benchmarks inspire innovation by exposing gaps in performance, encouraging the development of new algorithms or architectural designs.

Data Quality Assessment: Poor performance on benchmarks may indicate issues with training data, prompting better preprocessing, augmentation, or dataset refinement techniques.

... AI benchmarking involves a systematic process to evaluate the performance of AI models using rigorous methodologies. These methodologies ensure that assessments are fair, consistent, and meaningful, enabling stakeholders to make informed decisions about model performance and applicability.
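In practice, a benchmark run reduces to scoring model outputs against references and reporting comparable metrics. The harness below is a minimal sketch with a toy dataset and a stand-in model:

```python
import time

def evaluate(model, benchmark: list[tuple[str, str]]) -> dict:
    """Score a model on (prompt, expected) pairs: accuracy plus mean latency."""
    correct, latencies = 0, []
    for prompt, expected in benchmark:
        start = time.perf_counter()
        answer = model(prompt)            # stand-in for a real inference call
        latencies.append(time.perf_counter() - start)
        correct += answer.strip() == expected
    return {
        "accuracy": correct / len(benchmark),
        "mean_latency_s": sum(latencies) / len(latencies),
    }

# Toy benchmark and a "model" that only ever answers "4"
toy = [("2+2=", "4"), ("capital of France?", "Paris")]
print(evaluate(lambda prompt: "4", toy))  # {'accuracy': 0.5, ...}
```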


Why data is the hottest commodity in cybersecurity

“The value of data has skyrocketed in recent years, transforming it into one of the most sought-after commodities in the digital age. The rise of AI and machine learning has only amplified the threat to data, as attackers can now automate their efforts and create more sophisticated and targeted campaigns.” Saceanu noted that Irish organisations, like those globally, are struggling to secure their systems and private information, with industries that typically hold sensitive data, such as those in healthcare, finance and education, being particularly vulnerable. “We have seen a massive focus on targeting organisations that operate in critical infrastructure for various motivations – financially oriented or to disrupt operations. This means that there are more and more ransomware attacks on manufacturing, energy and healthcare that are not only encrypting data, but also exfiltrating this data to ask for enormous ransom payments because they know that these organisations cannot afford any disruption.” For Saceanu, this shift to an environment driven by data and under near-constant threat has led organisations to experiment with advanced technologies such as AI in order to improve efficiency and spearhead innovation.


Proper ID Verification Requires Ethical Technology

When it comes to identity security, security teams should regularly monitor, identify, analyze, and report risks in their environment. If exploited, these risks can be detrimental to an organization, its assets, and stakeholders. They can also undercut ethical standards of privacy and data protection. Running risk assessments is especially important when there is a lack of visibility in company processes and security gaps. Organizations can systematically assess their security measures surrounding user identity data and ensure compliance with privacy policies and regulatory standards. ... Transparency is among the most vital aspects of ethical identity verification. It requires organizations to be upfront about how they practice data collection and management, and how the data is used. This has to be reflected in the company policies, culture, and of course, its technology, including data storage and access. Users, i.e., customers from whom data is collected, should be able to access the policy terms easily at any point. ... When companies are looking to procure ethical technology, it’s important to account for factors like privacy, accessibility, security, and regulations. The above factors look at the perspective of the company using the tech and how they should operate it. 


Accelerating Business Growth Using AIOps and DevOps

The rapid evolution of AI brings forth several new potential opportunities and challenges. Today, AI drives the business growth of an enterprise in more ways than one. Artificial intelligence for IT operations, or AIOps, is a new concept that encompasses big data, data mining, machine learning (ML) and AI. AIOps is a practice that blends AI with IT operations to improve operational processes. AIOps platforms automate, optimize and improve IT operations, providing users with real-time visibility and predictive alerts that minimize operational issues and allow teams to resolve emerging problems proactively, keeping IT operations running smoothly. ... Adopting AIOps helps DevOps through automation, predictive intelligence and better data-driven decisions. This collaboration fosters efficient processes, improved quality and continuous improvement to meet the ever-changing demands of the industry and customer requirements. ... AI makes it easier for DevOps teams to find patterns in data, derive meaning from that data and make informed decisions about which resources and processes to allocate. The convergence of AIOps and DevOps processes can yield valuable insights that help improve decision-making.
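As a toy illustration of predictive alerting, the sketch below flags a metric that deviates sharply from its recent baseline using a z-score; real AIOps platforms correlate many signals at once, but the pattern is similar, and the window and threshold here are illustrative:

```python
from statistics import mean, stdev

def alert_on_anomaly(history: list[float], latest: float,
                     threshold: float = 3.0) -> bool:
    """Alert when the latest metric sits more than `threshold` standard
    deviations from the recent baseline (a simple z-score test)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return False  # flat baseline: nothing to compare against
    return abs(latest - mu) / sigma > threshold

cpu_history = [41.0, 39.5, 40.2, 42.1, 40.8, 39.9]
print(alert_on_anomaly(cpu_history, 95.0))  # True: raise a proactive alert
```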


When is data too clean to be useful for enterprise AI?

Not cleaning your data enough causes obvious problems, but context is key. Google suggests pizza recipes with glue because that’s how food photographers make images of melted mozzarella look enticing, and that should probably be sanitized out of a generic LLM. But that’s exactly the kind of data you want to include when training an AI to give photography tips. Conversely, some of the other inappropriate advice found in Google searches might have been avoided if the origin of content from obviously satirical sites had been retained in the training set. “Data quality is extremely important, but it leads to very sequential thinking that can lead you astray,” Carlsson says. “It can end up, at best, wasting a lot of time and effort. At worst, it can go in and remove signal from your data, and actually be at cross purposes with what you need.” ... AI needs data cleaning that’s more agile, collaborative, iterative and customized for how data is being used, adds Carlsson. “The great thing is we’re using data in lots of different ways we didn’t before,” he says. “But the challenge is now you need to think about cleanliness in every one of those different ways in which you use the data.” Sometimes that’ll mean doing more work on cleaning, and sometimes it’ll mean doing less.


Architectural Intelligence – The Next AI

The vast majority of software has deterministic outcomes. If this, then that. This allows us to write unit tests and have functional requirements. If the software does something unexpected, we file a bug and rewrite the software until it does what we expect. However, we should consider AI to be non-deterministic. That doesn’t mean random, but there is an amount of unpredictability built in, and that’s by design. The feature, not a bug, is that the LLM will predict the most likely next word. "Most likely" does not mean "always guaranteed". For those of us who are used to dealing with software being predictable, this can seem like a significant drawback. However, there are two things to consider. First, GenAI, while not 100% accurate, is usually good enough. ... When considering AI components in your system design, consider where you are okay with "good enough" answers. I realize we’ve spent decades building software that does what it’s expected to do, so this may be a complex idea to think about. As a thought exercise, replace a proposed AI component with a human. How would you design your system to handle incorrect human input? Anything from UI validation to requiring a second person’s review. What if the User in User Interface is an AI? 
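Following the article's thought exercise, one practical pattern is to treat the model exactly like an untrusted user: validate its output against a schema and retry or escalate on failure. The sketch below uses a hypothetical call_llm stand-in whose output, by design, cannot be fully trusted:

```python
import json
import random

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API; like a real model, it does
    not always follow formatting instructions."""
    return random.choice(["Sure! The answer is Paris.", '{"answer": "Paris"}'])

def get_structured_answer(prompt: str, retries: int = 3) -> dict:
    """Validate non-deterministic output the way UI code validates user input."""
    for _ in range(retries):
        raw = call_llm(prompt + '\nRespond only with JSON: {"answer": ...}')
        try:
            parsed = json.loads(raw)
            if isinstance(parsed, dict) and "answer" in parsed:
                return parsed              # passed the schema check
        except json.JSONDecodeError:
            pass                           # malformed output: ask again
    raise ValueError("no valid output after retries; escalate to a human")

print(get_structured_answer("What is the capital of France?"))
```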


The Impact of Advanced Data Lineage on Governance

Advanced data lineage (ADL) provides a powerful set of tools for understanding data’s history. It is proactive and preventative, addressing data issues in the moment or before they happen. Advanced data lineage represents a significant evolution: historically, traditional data lineage tracked data movement and transformations linearly. Consequently, organizations often receive static reports that quickly become outdated in fast-changing data environments. ... As ADL transforms how organizations understand and manage their data, it requires a corresponding evolution in data governance practices. This transformation requires more than selecting the right software; it calls for an adaptive framework that supports efficient assessments of, and actions on, lineage information. An adaptive Data Governance framework is flexible enough to respond quickly to new insights provided by ADL, while still maintaining a structured approach to data management. With this shift comes increased and more frequent interaction between adaptive DG teams and other departments to resolve issues. To do this well, a framework should clearly define roles, responsibilities, and escalation paths for addressing issues identified by ADL. This approach is agile while maintaining a solid methodological foundation.


Navigating AI Regulations: Key Insights and Impacts for Businesses

The historical risks associated with AI highlight the need for careful consideration and proactive management as these technologies continue to evolve. Addressing these challenges requires collaboration among technologists, policymakers, ethicists, and society at large to ensure that the development and deployment of AI provides positive contributions to society while also minimizing potential harms. AI systems raise significant data privacy concerns because they collect and process vast amounts of personal data. Regulatory frameworks establish guidelines for data protection. These ensure individuals’ information is handled securely, responsibly, and with their full consent. AI systems must be understandable, fair, incorporate human judgment, and be ethical. Trustworthy AI systems should perform reliably across various conditions and be resilient to errors or attacks. Developers must comply with privacy laws and safeguard personal data used in training AI models. This includes obtaining user consent for data usage and implementing strong security measures to protect sensitive information.
 


Quote for the day:

"Small daily imporevement over time lead to stunning results." -- Robin Sherman

Daily Tech Digest - November 26, 2024

Just what the heck does an ‘AI PC’ do?

As the PC market moves to AI PCs, x86 processor dominance will lessen over time, especially in the consumer AI laptop market, as Arm-based AI devices grab more share from Windows x86 AI and non-AI laptops, according to Atwal. “However, in 2025, Windows x86-based AI laptops will lead the business segment,” Atwal said. ... “We see AI-enabled PCs evolving to provide more personalized, adaptive experiences that are tailored to each user’s needs,” Butler said. “The rise of generative AI was a pivotal moment, yet reliance on cloud processing raises concerns around data privacy.” Each component of a PC plays a unique role in making AI tasks efficient, but the NPU is key for accelerating AI computations with minimal power consumption, according to Butler. In general, he said, AI PCs assist in or handle routine tasks to be more efficient and intuitive for users without the need to access an external website or service. ... AI PCs can also boost productivity by handling routine tasks such as scheduling and organizing emails, and by enhancing collaboration with real-time translation and transcription features, according to Butler. 


Humanity Protocol: ‘We’re building a full credential ecosystem’

Distinguishing between humans and machines online has become more important than ever. Over the past years, the digital world has seen a proliferation of AI-fueled deepfake impersonations, bots and Sybil attacks, in which a single entity creates many false identities to gain influence. An increasing number of companies are trying to come up with solutions relying on blockchain technology. One of the more well-known projects is World Network, previously known as Worldcoin, which scans irises to confirm their users are human. But the space is seeing more and more competitors relying on biometrics to prove people are real – including Humanity Protocol. “There are definitely a bunch of companies that are trying to solve the whole Proof of Personhood problem,” the company’s founder Terence Kwok told Biometric Update in an interview earlier this month. “We’re lucky to be one of the few that have started launching, building a user base and joined the market.” The company launched a testnet in October, allowing users and developers to get their first taste of the platform and receive some free cryptocurrency. The project has so far signed up over a million people – moving quickly to catch up with World Network which currently has 15 million users, including 7 million verified through its Orb iris-scanning technology.


The way we measure progress in AI is terrible

Benchmark creators often don’t make the questions and answers in their data set publicly available either. If they did, companies could just train their model on the benchmark; it would be like letting a student see the questions and answers on a test before taking it. But that makes them hard to evaluate. Another issue is that benchmarks are frequently “saturated,” which means all the problems have pretty much been solved. For example, let’s say there’s a test with simple math problems on it. The first generation of an AI model gets 20% on the test, failing. The second generation of the model gets 90% and the third generation gets 93%. An outsider may look at these results and determine that AI progress has slowed down, but another interpretation could just be that the benchmark got solved and is no longer that great a measure of progress. It fails to capture the difference in ability between the second and third generations of a model. One of the goals of the research was to define a list of criteria that make a good benchmark. “It’s definitely an important problem to discuss the quality of the benchmarks, what we want from them, what we need from them,” says Ivanova. “The issue is that there isn’t one good standard to define benchmarks. This paper is an attempt to provide a set of evaluation criteria. That’s very useful.”


Governance Considerations and Pitfalls When Implementing GenAI

Many large organizations are still in the process of establishing robust information governance frameworks for their current environments. Now, they must also address questions about their readiness to manage the impact of Copilot and similar generative AI tools. These questions include whether they can uphold appropriate access, use, and management across their IT infrastructure. Additionally, organizations should assess whether new artifacts are being created that could introduce unforeseen regulatory risk. ... With Copilot, anything a user has permission to access may surface as part of a response to a query or prompt. Without Copilot, when users are over-permissioned and have access to documents that they should not, they would only uncover the document if actively searching for it. Therefore, excess permissions and failure to limit access to certain materials can potentially expose information to far more employees than intended. To manage this, organizations must be diligent in defining controls and thoroughly understand the range of materials that Copilot users can access at different permission levels. Notably, when Copilot is turned on for a user, every application within Microsoft 365 that has a Copilot element will have AI activated.


Next-Gen Networking: Exploring the Utility of Smart Routers in Data Centers

In cases where smart routers offer automated network management capabilities, they usually do so based on software that provides features like the ability to reroute packets to help balance network load or discover new devices automatically when they join the network. In this sense, smart routers don’t really do anything all that new; the sorts of capabilities just mentioned have long been a standard part of network management software. The only differentiator for smart routers, perhaps, is that these devices come bundled with software that enables them to help manage networks automatically, instead of requiring additional network management tools for that purpose. In addition, there seems to be a focus in smart router land on the notion of hands-off network management. Instead of requiring admins to configure networking policies and apply them manually, smart routers promise in many cases to manage your networks for you. It's essentially an example of what you might categorize as NoOps. It’s worth noting, too, that in more than a few cases, smart router vendors are slapping the “AI” label on their devices. But like many vendors who profess to be selling AI-powered solutions today, they're using the term loosely to refer to any type of software that uses data analytics in some sort of way.


Digitising India with AI-based photogrammetry software

Photogrammetry is the capture of measurements from photographs shot by drones, satellites, or aerial photography, and the generation of maps and 3D models, extending even to a Geographic Information System (GIS). Traditionally, photogrammetric processing involved collecting a huge amount of data through manual effort, with post-processing handled by experts over a considerable period. The introduction of AI and machine learning into photogrammetry has streamlined all these processes, making them faster as well as more automation-friendly. Now, with AI photogrammetry software, one can explore thousands of aerial images automatically to acquire accurate topographic maps and even real-time 3D models. ... Errors in land surveys can be very expensive and lead to many complications, especially in construction, farming, and city management. Using AI-based photogrammetry increases accuracy in measurement and reduces human error in the process. AI algorithms improve the quality of the resulting maps and models by identifying and rectifying anomalies in the data automatically. The system can also blend images from different sources, such as aerial pictures, LiDAR data, and satellite images, to provide a better and more accurate picture of the land.


Will AI Kill Google? Past Predictions of Doom Were Totally Wrong

Sam Altman, the top executive overseeing ChatGPT, has said that AI has a good shot at shoving aside Google search. Bill Gates predicted that emerging AI will do tasks like researching your ideal running shoes and automatically placing an order so you'll "never go to a search site again." ... AI definitely could draw us away from Google in ways that smartphones and social media didn't. When you're planning a garden, an AI helper might guide you through where you want the flowers and fruit trees and hire help for you. No Googling necessary. "People are increasingly turning to ChatGPT to find information from the web, including the latest news," Altman's company, OpenAI, said. Maybe it's right to extrapolate from how people are starting to use AI today. Or maybe that's the mistake that Jobs made when he said no one was searching on iPhones. It wasn't wrong in 2010, but it was within a few years. Or what if AI upends how billions of us find information and we still keep on Googling? "The notion that we can predict how these new technologies are going to evolve is silly," said David B. Yoffie, a Harvard Business School professor who has spent decades studying the technology industry. 


Practical strategies to build an inclusive culture in cybersecurity

Despite meaningful progress, the cybersecurity and IT industries continue to face significant challenges in creating truly inclusive environments. Unconscious bias remains a pervasive issue, often influencing hiring, evaluation, and promotion processes, which can disadvantage women and other underrepresented groups. Retention is another ongoing challenge, as many organizations struggle to cultivate workplace cultures that are welcoming and supportive enough to retain diverse talent long-term. Barriers to entry and advancement persist, highlighting the need for continuous improvement and active intervention. While the industry has made strides in recognizing the importance of diversity, achieving full representation and inclusivity requires sustained commitment and effort. The current focus on diversity is encouraging, but only through consistent attention and action will the industry overcome these longstanding challenges and ensure a more equitable future. ... Work-life balance is another significant issue, particularly in cultures where traditional gender roles are still prevalent. Women often face greater expectations regarding balancing work and family, which can impact their career trajectory, especially in environments that lack flexible work arrangements. 


5 ways to achieve AI transformation that works for your business

"Never work in a silo and prepare to be wrong in terms of how you've set the technology up." Kollnig and her colleagues have implemented the Freshworks Customer Service Suite, an omnichannel support software with AI-powered chatbots and ticketing. She told ZDNET that working closely with the technology partner has helped her team to deliver a successful AI transformation. "So, for one of our AI projects, we established our basic set-up and said, 'Freshworks, come in and audit it. Tell us, are we doing this right? Would you do it differently?'" she said. ... Moyes said professionals in all sectors should take some sensible steps, including working with people who know more about AI. "Within every organization, there are groups of technology leads who are interested and want to innovate, evolve, and push," he said. "Lean on them. Learn from those at the coal face who want to do AI. There are no guarantees that the technologies you introduce will be the next best thing, but at least you'll be aware of the potential." Moyes said SimpsonHaugh is looking at how AI can reduce time-intensive tasks, such as summarizing text, and help staff find images to create early-stage design proposals.


What Does Enterprise-Wide Cybersecurity Culture Look Like?

Whoever is championing enterprise-wide security needs to secure buy-in from everyone within an organization. At the top, that means getting the C-suite and board to throw their weight behind security. “At the end of the day, if you don't have the CEO on board and the CEO isn't … voicing the same level of prioritization, then it will be something that's viewed as a half step back from … fundamental business priorities,” Cannava warns. Effective communication is a big part of getting that buy-in from leadership. How can security leaders explain to their boards and fellow executives that security is an essential business enabler? “Really [convert] the technology language or cyber language or jargon into how will … that risk potential impact revenue or reputation or our compliance?” says Landen. Tabletop exercises can be a powerful way to not just tell but show executives the value of cybersecurity. Walking through various cybersecurity incident scenarios can demonstrate the vital connection security has to operations and business outcomes. Ping Identity periodically engages multiple members of the C-suite in these exercises. “Not only do you know learn what the gap is, you also learn by doing … you're pulled in and engaged as a member of the C-suite, and now you're invested,” he says.



Quote for the day:

"Great leaders do not desire to lead but to serve." -- Myles Munroe

Daily Tech Digest - November 25, 2024

GitHub Copilot can make inline code suggestions in several ways. Give it a good descriptive function name, and it will generate a working function at least some of the time—less often if it doesn’t have much context to draw on, more often if it has a lot of similar code to use from your open files or from its training corpus. ... Test generation is generally easier to automate than initial code generation. GitHub Copilot will often generate a reasonably good suite of unit tests on the first or second try from a vague comment that includes the word “tests,” especially if you have an existing test suite open elsewhere in the editor. It will usually take your hints about additional unit tests, as well, although you might notice a lot of repetitive code that really should be refactored. Refactoring often works better in Copilot Chat. Copilot can also generate integration tests, but you may have to give it hints about the scope, mocks, specific functions to test, and the verification you need. ... GitHub Copilot Code Reviews can review your code in two ways, and provide feedback. One way is to review your highlighted code selection (Visual Studio Code only, open public preview, any programming language), and the other is to more deeply review all your changes. Deep reviews can use custom coding guidelines.


Closed loop optimisation: Opening a world of advantages for marketers

In marketing, closed loop optimisation refers to the collection and analysis of various data across the marketing lifecycle or customer journey to create a continuous cycle of learning and data-led decision-making. By closing the customer journey loop, starting with the first interaction all the way to “post-sale”, brand marketers can evaluate the effectiveness of advertising campaigns and channels, and deploy their resources in initiatives that deliver the best outcomes. ... With advanced analytics solutions, marketing organisations can process structured and unstructured data from internal and external sources to identify emerging trends, customer needs and behaviours, and other metrics that can inform brand strategies. When a health technology company understood with the help of analytics that user-generated content was a key factor in strengthening interactions with customers, it changed the content strategy to include user feedback, and thereby fostered a sense of community, improved credibility, and elevated the brand experience to substantially increase social media engagement within eighteen months. A top U.S. professional basketball team used predictive analytics to uncover new trends and understand the type of content that would resonate best with fans around the world.


The rise of autonomous enterprises: how robotics, AI, and automation are reshaping the workforce of tomorrow

An autonomous enterprise is an organisation that has successfully implemented the best application of automation technologies to function with minimal human intervention in most aspects. From routine administrative tasks to complex decision-making processes, autonomous enterprises leverage AI, ML, and RPA to drive efficiency, accuracy, and agility. Companies across sectors such as manufacturing, healthcare, logistics, and more are looking towards automation to streamline operations, reduce costs, and innovate. ... As human-machine collaboration grows, there is an increasing need for employers and educational institutions to address reskilling and upskilling to prepare the workforce for continuously changing labour markets. This does not mean automation will eliminate human jobs, but it will definitely require more creativity, critical thinking, and emotional intelligence among human employees—the very qualities AI cannot encapsulate. ... As robotics and AI continue to revolutionise the world, the ethical and governance challenges arising from them have to be responded to proactively and thoughtfully. Privacy, bias, and accountability issues have to be strongly addressed so that these technologies are developed and deployed appropriately.


Overcoming legal and organizational challenges in ethical hacking

A professional ethical hacker must have a broad understanding of various IT systems, networking, and protocols – essentially, a deep “under the hood” knowledge. This foundational expertise allows them to navigate different environments effectively. Additionally, target-specific knowledge is crucial, as the security measures and vulnerabilities can vary significantly based on the technology stack in use. ... AI and machine learning can significantly enhance ethical hacking efforts. On the offensive side, automated processes supported by AI can efficiently identify vulnerabilities and suggest areas for further manual security testing. This streamlines the initial phases of penetration testing and helps uncover potential issues more effectively. Additionally, AI can assist in generating detailed penetration testing reports, saving time and ensuring accuracy. On the defensive side, AI and machine learning are invaluable for detecting anomalies and correlating data to identify potential threats. These technologies enable a proactive approach to cybersecurity, enhancing both offensive and defensive strategies. By using AI and machine learning, ethical hackers can improve their effectiveness. 


Why The Gig Economy Is A Key Target For API Attacks

One of the most difficult attacks to prevent is business logic abuse. Strictly speaking, it isn’t an attack at all. Business logic abuse sees the functionality of the API used against it, so that a task it is supposed to execute is then used to carry out an attack. It might be used to subvert access control, for instance, with attackers manipulating URLs, session tokens, cookies, or hidden fields to gain advanced privileges and access sensitive data or functionality. Or bots may attempt to repeatedly sign up, log in, or execute purchases in order to validate credentials, access unauthorised data, or commit fraud. Perhaps flaws in session tokens or poor handling of session data allow the attacker to hijack sessions and escalate privileges. Or the attacker may try to bypass built-in constraints on business logic by reviewing points of entry, such as form fields, and coming up with inputs that the developers may not have planned for. ... Legacy app defences rely on embedding JavaScript code into end-user applications and devices, which slows deployment and leaves platforms vulnerable to reverse engineering. Some of this code, such as CAPTCHAs, also introduces customer friction.
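A concrete sketch of the hidden-field variant: the server must recompute anything security-relevant instead of trusting client-supplied values, and reject inputs outside expected business constraints. The endpoint, price table, and quantity limit below are hypothetical:

```python
PRICES = {"sku-123": 49.99, "sku-456": 9.99}  # server-side source of truth

def checkout(request: dict) -> dict:
    # VULNERABLE pattern: total = request["price"] * request["qty"]
    # lets an attacker edit the hidden "price" field to 0.01.
    sku = request["sku"]
    qty = int(request["qty"])

    # SAFE pattern: recompute from server-side data and enforce constraints
    if sku not in PRICES or not (1 <= qty <= 10):
        raise ValueError("rejected: input violates expected business logic")
    return {"sku": sku, "total": round(PRICES[sku] * qty, 2)}

print(checkout({"sku": "sku-123", "qty": "2"}))  # {'sku': 'sku-123', 'total': 99.98}
```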


From Contractors to OAuth: Emerging SDLC Threats for 2025

Outsourcing software development is common practice but opens the door to significant security risks when not properly managed. These outsourced operations lack the same stringent security measures applied to internal teams, creating blind spots that attackers can easily leverage. A common vulnerability in this scenario is the over-provisioning of access rights. ... Poorly configured CI/CD pipelines are another critical weakness. When organizations outsource software development, they often have little visibility into the security practices of their contractors’ environments. Attackers can exploit poorly configured pipelines to access source code or manipulate software delivery processes. ... Preventing OAuth phishing can be difficult because it exploits user behavior rather than traditional technical vulnerabilities. While phishing training is essential, the best defense is limiting the damage attackers can cause if they gain access. By restricting developer entitlements to only what is necessary for their role, organizations can reduce the impact of a compromised account and prevent broader system breaches. ... The most catastrophic SDLC security breaches in 2025 may not stem from technical vulnerabilities but from poorly managed development teams.


In a Growing Threat Landscape, Companies Must do Three Things to Get Serious About Cybersecurity

From a practical standpoint, execs and the board make budget decisions about every domain, including security. Unlike other domains, cybersecurity isn’t a profit center for most businesses, so it often gets underfunded compared to business units and projects that generate revenue. That’s a problem. If executives understand how much is at stake from a fundamental business level, they will invest in bolstering their cybersecurity posture. Cybersecurity is essential to protecting profit centers and enabling them to safely grow. And more and more, customers are looking at a company’s security bona fides when making their buying decisions. It’s in the execs’ self-interest to take charge in adopting a cybersecurity posture as they will ultimately be held accountable in the event of a catastrophe. ... It’s also essential to have an honest, objective CISO at the helm of cybersecurity who has power at the executive table. The C-suite and board won’t ever know how to effectively prioritize security unless they have a CISO guiding them accordingly. Communication is central here. There has to be open discussion between the CISO and the rest of the C-suite regularly.


Perimeter Security Is at the Forefront of Industry 4.0 Revolution

Perimeter security is crucial for military and government organizations and business enterprises alike to detect potential threats, deter possible intruders, and delay illegal attempts to breach a secured area or perimeter. Additionally, perimeter security maintains operational continuity within these organizations. To prevent unauthorized entry to the premises, high-security installations, commercial centers, government facilities and other organizations can establish a physical barrier utilizing detection and deterrence techniques. ... The effectiveness of a perimeter security system depends upon several factors, such as the design and implementation of the security measures, proper integration of physical and electronic devices, and the expertise of well-trained personnel. A well-designed perimeter security system should provide comprehensive coverage of any building or premises with multiple layers of security that can create effective obstacles against intruders and thieves. Regular maintenance and testing of the perimeter security system is necessary to ensure its continued efficiency. It is critical to continuously assess and expand perimeter security measures in order to counter different types of threats and hazards.


5 Trends Reshaping the Data Landscape

Before companies can successfully leverage AI and advanced analytics, it’s urgent to address the “runaway data movement and data pipeline challenges that are so common in enterprises,” he pointed out. “When you think about data movement and data pipelines, most customers have transactional systems or legacy environments that then feed data to downstream systems. Or they’re getting a firehose of data from a variety of sources that are coming from the cloud, and they can be batch or streaming data.” What happens is these organizations “take that data and transform or consume it by multiple business units using their own extract, transform, and load (ETL) solutions,” he illustrated. “They can be completely different types of data. This is typically the first kind of deviation or loss of a unified source of truth for the data.” The ETL solutions that each group manages “have their own user acceptance testing or production environments, which means more copies of data,” he pointed out. “Then that data is fed to multiple systems, maybe for dashboarding or for more low-latency analytics. But it’s also fed to their systems, like OLAP systems or data lakes.” If a data team “can’t get the data where it needs to go, they’re not going to be able to analyze it in an efficient, secure way,” he said.


Top challenges holding back CISOs’ agendas

With limited resources and an ever-growing list of threats, CISOs are often caught managing multiple projects at once. Some of these might move forward bit by bit, but without clear milestones or measurable progress, it’s difficult to show their real impact. This makes it harder for CISOs to secure extra funding or support, especially when stakeholders can’t see solid, tangible results. “That makes it almost impossible to show meaningful success,” says John Terrill, CSO at Phosphorus. “A lot of times, this can come from trying to boil the ocean.” Many CISOs recommend learning to “speak business” and occasionally scaring the board to get more funding, but these can only go so far. “The company has a finite amount of resources; you need to make peace with that,” Avivi says. ... “Aligning both the workforce and the organization’s leadership around risk appetite helps tremendously to focus your energy and your dollars in the places that most need them,” says Ken Deitz, CISO at Secureworks. “If an organization has a stated risk appetite for security risk, the priorities start to jump off the page.” CISOs should be open about the risk the organization will take if their priorities are not addressed. 



Quote for the day:

"A leadership disposition guides you to take the path of most resistance and turn it into the path of least resistance." -- Dov Seidman

Daily Tech Digest - November 24, 2024

AI agents are unlike any technology ever

“Reasoning” and “acting” (often implemented using the ReAct — Reasoning and Acting — framework) are key differences between AI chatbots and AI agents. But what’s really different is the “acting” part. If the main agent LLM decides that it needs more information, some kind of calculation, or something else outside the scope of the LLM itself, it can choose to solve its problem using web searches, database queries, calculations, code execution, APIs, and specialized programs. ... Since the dawn of computing, the users who used software were human beings. With agents, for the first time ever, the software is also a user who uses software. Many of the software tools agents use are regular websites and applications designed for people. They’ll look at your screen, use your mouse to point and click, switch between windows and applications, open a browser on your desktop, and surf the web — in fact, all these abilities exist in Anthropic’s “Computer Use” feature. Other tools that the agent can access are designed exclusively for agent use. Because agents can access software tools, they’re more useful, modular, and adaptable. Instead of training an LLM from scratch, or cobbling together some automation process, you can instead provide the tools the agent needs and just let the LLM figure out how to achieve the task at hand.
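A minimal sketch of that reason-act loop, with a hypothetical llm stand-in and a toy tool registry; production agent frameworks layer planning, memory, and guardrails on top of this same skeleton:

```python
def llm(transcript: str) -> str:
    """Hypothetical stand-in for an LLM call: returns either
    'ACT: <tool> <input>' or 'FINISH: <answer>'."""
    return "ACT: calc 6*7" if "Observation" not in transcript else "FINISH: 42"

TOOLS = {
    "search": lambda q: f"top results for {q!r}",  # placeholder web search
    "calc": lambda expr: str(eval(expr)),          # toy calculator; unsafe in production
}

def run_agent(task: str, max_steps: int = 5) -> str:
    transcript = f"Task: {task}"
    for _ in range(max_steps):
        decision = llm(transcript)                 # the "reasoning" step
        if decision.startswith("FINISH:"):
            return decision.removeprefix("FINISH:").strip()
        _, tool, arg = decision.split(" ", 2)      # the "acting" step
        transcript += f"\n{decision}\nObservation: {TOOLS[tool](arg)}"
    return "step limit reached; escalate to a human"

print(run_agent("What is 6 times 7?"))  # 42
```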


Live On the Edge

Why live on the edge now? Because, despite public cloud usage being ubiquitous, many deployments are ad hoc and poorly implemented. “The focus of refactoring cloud infrastructure should be on optimizing costs by eliminating redundant, overbuilt or unused cloud infrastructure,” says Gartner. ... Can edge computing also benefit the environment? Yes, according to a study by IBM Corp. “One direct way is by using edge computing to monitor protected species of wildlife inhabiting remote places,” IBM says. “Edge computing can help wildlife officials and park rangers identify and stop poaching activities, sometimes before these offenses even occur.” Another relates to energy management. “Edge computing supports the use of smart grids, which can deliver energy more efficiently and help businesses leave a smaller carbon footprint,” IBM notes. “Grid or distributed computing is where a group of machines and networks work together for a common computing purpose. Resources are utilized in an optimized manner, thus reducing the amount of waste that can occur when large quantities of power are consumed.” More significantly, edge computing can also support the remote monitoring of oil and gas assets. 


Getting started with AI agents (part 1): Capturing processes, roles and connections

An organizational chart might be a good place to start, but I would suggest starting with workflows, as the same people within an organization tend to act through different processes and with different people depending on the workflow. There are available tools that use AI to help identify workflows, or you can build your own gen AI model. I’ve built one as a GPT which takes the description of a domain or a company name and produces an agent network definition. Because I’m utilizing a multi-agent framework built in-house at my company, the GPT produces the network as a HOCON file, but it should be clear from the generated files what the roles and responsibilities of each agent are and what other agents it is connected to. Note that we want to make sure the agent network is a directed acyclic graph (DAG). This means that no agent can simultaneously be down-chain and up-chain of any other agent, whether directly or indirectly. This greatly reduces the chances that queries in the agent network fall into a tailspin. In the examples outlined here, all agents are LLM-based. If a node in the multi-agent organization has zero autonomy, then that agent, paired with its human counterpart, should run everything by the human.
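One cheap way to enforce the DAG property is a cycle check over the agent network definition before deploying it. The sketch below runs a depth-first search over a hypothetical adjacency map of up-chain to down-chain agent names:

```python
def has_cycle(network: dict[str, list[str]]) -> bool:
    """True if some agent is (transitively) both up-chain and down-chain
    of another, i.e., the network is not a directed acyclic graph."""
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current path / finished
    color: dict[str, int] = {}

    def dfs(agent: str) -> bool:
        color[agent] = GRAY
        for downstream in network.get(agent, []):
            if color.get(downstream) == GRAY:
                return True  # back edge: agent reachable from itself
            if color.get(downstream, WHITE) == WHITE and dfs(downstream):
                return True
        color[agent] = BLACK
        return False

    return any(color.get(a, WHITE) == WHITE and dfs(a) for a in network)

# Hypothetical network: a lead agent delegates to ops, ops to support
net = {"lead": ["ops"], "ops": ["support"], "support": []}
print(has_cycle(net))  # False: safe to deploy
```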


Preparing Project Managers for an AI-Driven Future

Right now, about 95% of AI conversations are around tools that help people do their jobs better, like ChatGPT or other large language models. For most project managers, AI can be a huge timesaver. Think of it as a tool that takes on repetitive tasks—like summarizing meeting notes or helping with scheduling—so you can focus on higher-value work. ... AI can free you up to focus on the strategic parts of your job. It’s not here to replace project managers; it’s here to make them more efficient. At this moment, a lot of people are using AI from a personal or group productivity perspective, but they are increasingly going to depend on AI as part of their team. You’re already managing more AI than you might think, and in the future you’ll be managing a lot more. Some things will be done by people, some will be done by machines, and we need to make sure the whole effort happens in a deliberately planned way. ... The first thing to understand is that AI projects are data projects. If you’re used to traditional software projects, where functionality is front and center, AI is different. AI relies on data quality—“garbage in, garbage out,” as they say. Your primary focus needs to be on getting the right data in and managing the outputs, which are data as well.


Making quantum computing accessible through decentralization

A decentralized model for quantum computing sidesteps many of these challenges. Rather than relying on centralized, hardware-intensive setups, it distributes computational tasks across a global network of nodes. This approach taps into existing resources—standard GPUs, laptops, and servers—without the extreme cooling or complex facilities that traditional quantum hardware requires. The decentralized network forms a collective computational resource capable of solving real-world problems at scale using quantum techniques. This decentralized Quantum-as-a-Service approach emulates the behavior of quantum systems without strict hardware demands. By decentralizing the computational load, these networks achieve efficiency and speed comparable to traditional quantum systems—without the same logistical and financial constraints. ... Decentralized quantum computing represents a transformative shift in how we approach advanced problem-solving. By leveraging accessible infrastructure and distributing tasks across a global network, it brings powerful computing within reach of many who were previously excluded.


Data Security vs. Cyber Security – Why the Difference Matters

Cybersecurity is the practice of safeguarding digital systems, networks, and programs from attacks that aim to steal, alter, or destroy sensitive data, extort money through ransomware, or disrupt business operations. Despite a substantial $183 billion investment in traditional security measures in 2023 and projections of a 14% increase in security budgets for 2024, data breaches surged by 78%, reaching a record high. ... Data is a company’s most valuable commodity, yet resource allocation and time investment in data security rarely reflect that importance. Data security involves protecting the data itself: once protected, the data can travel anywhere and remain protected. Fine-grained safeguards on the data allow you to grant users the minimum access necessary for their job functions, and anyone who does need to use the data must be authorized to do so. ... Zero trust data protection techniques significantly enhance data security posture and business value. The first step to improving security and data value is identifying the most at-risk yet least accessed data. It’s essential to assess the need for clear-text visibility of high-risk data across people, processes, and systems, and to consider the business impact of minimizing this risk, including factors such as regulatory compliance, reputation, and insurance.
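
To make “protecting the data itself” concrete, here is a minimal sketch of field-level encryption using Fernet (symmetric authenticated encryption) from Python’s cryptography package. The record layout and field names are hypothetical; in practice the key would live in a KMS or HSM, and granting least-privilege access amounts to controlling who may use that key.

    # Sketch: the protected field stays protected wherever the record travels.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # in production, held in a KMS/HSM
    f = Fernet(key)

    record = {"customer_id": "c-1001", "ssn": "123-45-6789"}
    record["ssn"] = f.encrypt(record["ssn"].encode()).decode()  # protect the field

    # The record can now move between systems; only an authorized holder
    # of the key can recover the clear text.
    clear_text = f.decrypt(record["ssn"].encode()).decode()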


Is Your Phone Spying On You? How to Check and What to Do

For years, people have noticed advertisements for products they recently discussed in conversation — even without searching for them online — suddenly appearing on their devices. While many dismissed this as coincidence or attributed it to targeted advertising based on online searches, it turns out there’s more to the story. According to a report by 404 Media, a marketing firm has confirmed that smartphones are not just tracking users’ online activity — they are also listening to what users say out loud, near their phones. Smartphones might indeed be listening to our conversations, thanks to a technology known as “active listening.” This unsettling discovery comes after a marketing firm, whose clients include tech giants like Google and Facebook, admitted to using software that monitors users’ conversations through the microphones of their devices. This admission has raised serious questions about privacy, user consent, and the ethics of targeted advertising. ... For better or for worse, there is generally nothing illegal about using audio information to target advertising. While it is obviously illegal to spy on someone without their consent, most phone users have given their permission for this practice without knowing it, according to legal experts.


CNCF Brings Jaeger and OpenTelemetry Closer Together to Improve Observability

In the wake of adding support for OpenTelemetry, the project is now working on revamping the user interface for Jaeger to make that data more easily discoverable, in addition to normalizing dependency views. The project is also moving toward adding support for the Storage v2 interface to consume OpenTelemetry data natively, along with support for ClickHouse as the official storage backend for tracing data. Finally, the project intends to add support for Helm charts and an Operator that will make deploying Jaeger on Kubernetes clusters simpler. ... The challenge, of course, has been first finding the funding for observability initiatives, followed by the issues that arise as DevOps teams move to consolidate tooling. Many software engineers naturally become attached to a particular monitoring tool, and convincing them to swap it out for another platform requires effort and, most importantly, training. Each organization will decide for itself how far to drive tool consolidation; in many cases, however, the business case for acquiring an observability platform assumes savings generated by eliminating the need for other tools.
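
For readers who want to see what consuming OpenTelemetry data natively looks like from the application side, here is a minimal Python sketch that emits a span over OTLP, which current Jaeger versions can ingest directly (the collector listens for OTLP gRPC on port 4317 by default). The service name and endpoint are assumptions; the sketch requires the opentelemetry-sdk and opentelemetry-exporter-otlp packages.

    # Sketch: emit an OpenTelemetry span that Jaeger can ingest via OTLP.
    from opentelemetry import trace
    from opentelemetry.sdk.resources import Resource
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

    provider = TracerProvider(resource=Resource.create({"service.name": "checkout"}))
    provider.add_span_processor(
        BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
    )
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("checkout")
    with tracer.start_as_current_span("place-order"):
        pass  # work done here appears as a trace in the Jaeger UI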


Zero Days Top Cybersecurity Agencies' Most-Exploited List

The prevalence of zero-day vulnerabilities on this year's list is a reminder that attackers regularly seek ways of exploiting widely used types of software and hardware before vendors identify the underlying flaw and fix it. The joint security advisory also details guidance prepared by CISA and the National Institute of Standards and Technology designed to improve organizations' cyber resilience against all types of cybersecurity threats. Specific recommendations include regularly using automated asset discovery to find all of the hardware, software, systems and services inside an IT organization's estate and locking them down as much as possible; preparing and testing incident response plans; and keeping regular, secure backup copies stored off-network to facilitate rapid repair and restoration of systems. The guidance also recommends implementing zero trust network architecture, using phishing-resistant multifactor authentication as an identity and access management control, enforcing least-privileged access, and reducing the number of third-party applications and unique types of builds used.


Achieving Optimal Outcomes in Security Through Platformization

Platformization unifies multiple solutions and services into a single architecture with a shared data store and streamlined management. With native integrations, each component becomes more powerful than standalone products. This approach helps increase productivity, simplify operations, and extract the most value from data, all leading to better security outcomes and greater efficiency. ... Using the platform approach should never entail giving up security efficacy for the sake of vendor consolidation or simplified management. If there is a corresponding set of point products in a given area, the minimum bar by which the “platform” component must be measured is the very best of those individual tools. Flexibility and scalability are important. A platform needs to empower your company to gradually grow into using it. A total “rip and replace” of multiple security tools at once is far more complex than most enterprises are willing to attempt. It’s even harder when you factor in the differing replacement cycles of existing solutions. You need the option to adopt the platform piece by piece or all at once – whichever suits your organization best – while retaining the ability to cover all your security bases.



Quote for the day:

“Opportunities don’t happen, you create them.” -- Chris Grosser