Daily Tech Digest - November 30, 2024

API Mocking Is Essential to Effective Change Management

A constant baseline is essential when managing API updates. Without it, teams risk diverging from the API’s intended design, resulting in more drift and potentially disruptive breaking changes. API mocks serve as a baseline by accurately simulating the API’s intended behavior and data formats. This enables development and quality assurance teams to compare proposed changes to a standardized benchmark, ensuring that new features or upgrades adhere to the API’s specified architecture before deployment. ... A centralized mocking environment is helpful for teams who have to manage changes over time and monitor API versions. Teams create a transparent, trusted source of truth from a centralized environment where all stakeholders may access the mock API, which forms the basis of version control and change tracking. By keeping every team on the same baseline, aligned with the desired API behavior and structure, this centralized approach helps reduce drift. ... Teams that want to properly use API mocking in change management must include mocking techniques in their daily development processes. These techniques ensure that the API’s documented specifications, implementation and testing environments remain aligned, lowering the risk of drift and supporting consistent, open updates.
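
To make the idea concrete, here is a minimal sketch of a mock endpoint that could serve as such a baseline, written in Python with Flask. The route, field names and status value are illustrative assumptions rather than anything from the article; the point is that the mock always returns the agreed-upon shape, so proposed changes can be diffed against it.

from flask import Flask, jsonify

app = Flask(__name__)

# The agreed-upon contract for a hypothetical /users/<id> resource.
BASELINE_USER = {
    "id": "42",
    "name": "Ada Lovelace",
    "status": "active",   # contract treats this as an enum: active | suspended
}

@app.route("/users/<user_id>")
def get_user(user_id):
    # Always return the baseline shape so QA can compare proposed changes against it.
    return jsonify({**BASELINE_USER, "id": user_id}), 200

if __name__ == "__main__":
    app.run(port=5050)

Contract tests pointed at both this mock and the real implementation can then fail the build whenever the two response shapes diverge.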


How Open-Source BI Tools Are Transforming DevOps Pipelines

BI tools automate the tracking of all DevOps processes so one can easily visualize, analyze, and interpret the key metrics. Rather than manually monitoring metrics such as the percentage of successfully deployed applications or the time taken to deploy an application, one can now simply rely on BI to spot such trends in the first place. This gives one the ability to operationalize insights, which saves time and ensures that pipelines are well managed. ... If you are looking for an easy-to-use tool, Metabase is the best option available. It allows you to build dashboards and query databases without having to write elaborate code. It also lets users retrieve data from a variety of systems, which, from a business perspective, allows them to measure KPIs such as deployment frequency or the occurrence of system-related problems. ... If you have large-scale resources that need monitoring, Superset is perfect. Superset was designed with big data loads in mind, offering advanced visualization and projection capabilities across a variety of data stores. Businesses with moderately complex operational structures get the most out of Superset thanks to its state-of-the-art data manipulation abilities.
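
As a rough illustration of the kind of KPI such dashboards surface, the short Python sketch below computes deployment frequency and change failure rate from an assumed list of deployment records; Metabase or Superset would typically derive the same aggregates with a query over a deployments table.

from datetime import date

# Illustrative deployment records, not real data.
deployments = [
    {"day": date(2024, 11, 25), "succeeded": True},
    {"day": date(2024, 11, 26), "succeeded": False},
    {"day": date(2024, 11, 26), "succeeded": True},
    {"day": date(2024, 11, 28), "succeeded": True},
]

days_covered = (max(d["day"] for d in deployments) - min(d["day"] for d in deployments)).days + 1
frequency = len(deployments) / days_covered                      # deployments per day
failure_rate = sum(not d["succeeded"] for d in deployments) / len(deployments)

print(f"Deployment frequency: {frequency:.2f} per day")
print(f"Change failure rate: {failure_rate:.0%}")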


Inside threats: How can companies improve their cyber hygiene?

Reflecting on the disconnect between IT and end users, Dyer says that there will “always be a disparity between the two classes of employees”. “IT is a core fundamental dependency to allow end users to perform their roles to the best of their ability – delivered as a service for which they consume as customers,” he says. “Users wish to achieve and excel in their employment, and restrictions of IT can be a negative detractor in doing so.” He adds that users are seldom consciously trying to compromise the security of an organisation, and that lapses in security hygiene are due to a lack of investment, awareness, engagement or reinforcement. “It is the job of IT leaders to bridge that gap [and] partner with their respective peers to build a positive security awareness culture where employees feel empowered to speak up if something doesn’t look right and to believe in the mission of effectively securing the organisation from the evolving world of outside and inside threats.” And to build that culture, Dyer has some advice, such as making policies clearly defined and user-friendly, allowing employees to do their jobs using tech to the best of their ability (with an understanding of the guardrails they have) and instructing them on what to do should something suspicious happen.


Navigating Responsible AI in the FinTech Landscape

Cross-functional collaboration is critical to successful, responsible AI implementation. This requires the engagement of multiple departments, including security, compliance, legal, and AI governance teams, to collectively reassess and reinforce risk management strategies within the AI landscape. Bringing together these diverse teams allows for a more comprehensive understanding of risks and safeguards across departments, contributing to a well-rounded approach to AI governance. A practical way to ensure effective oversight and foster this collaboration is by establishing an AI review board composed of representatives from each key function. This board would serve as a centralized body for overseeing AI policy adherence, compliance, and ethical considerations, ensuring that all aspects of AI risk are addressed cohesively and transparently. Organizations should also focus on creating realistic and streamlined processes for responsible AI use, balancing regulatory requirements with operational feasibility. While it may be tempting to establish one consistent process, for instance, where conformity assessments would be generated for every AI system, this would lead to a significant delay in time to value. Instead, companies should carefully evaluate the value vs. effort of the systems, including any regulatory documentation, before proceeding toward production.


The Future Of IT Leadership: Lessons From INTERPOL

Cyber threats never keep still. The same can be said of the challenges IT leaders face. Historically, IT functions were reactive—fixing problems as they arose. Today, that approach is no longer sufficient. IT leaders must anticipate challenges before they materialise. This proactive stance involves harnessing the power of data, artificial intelligence (AI), and predictive analytics. It is by analysing trends and identifying vulnerabilities that IT leaders can prevent disruptions and position their organisations to respond effectively to emerging risks. This shift from reactive to predictive leadership is essential for navigating the complexities of digital transformation. ... Cybercrime doesn’t respect boundaries, and neither should IT leadership. Successful cybersecurity efforts often rely on partnerships—between businesses, governments, and international organisations. INTERPOL’s Africa Cyber Surge operations demonstrate the power of collaboration in tackling threats at scale. IT leaders need to adopt a similar mindset by building networks of trust across industries and government agencies, and even with and through competitors. Doing so can help create shared defences against common threats. Nor is collaboration limited to external partnerships.


4 prerequisites for IT leaders to navigate today’s era of disruption

IT leaders aren’t just tech wizards, but savvy data merchants. Imagine yourself as a store owner, but instead of shelves stocked with physical goods, your inventory consists of valuable data, insights, and AI/ML products. To succeed, you need to make your data products appealing by understanding customer needs and ensuring products are current, high quality, and well organized. Offering value-added services on top of data, like analysis and consulting, can further enhance the appeal. By adopting this mindset and applying business principles, IT leaders can unlock new revenue streams. ... With AI becoming more pervasive, the ethical and responsible use of it is paramount. Leaders must ensure that data governance policies are in place to mitigate risks of bias or discrimination, especially when AI models are trained on biased datasets. Transparency is key in AI, as it builds trust and empowers stakeholders to understand and challenge AI-generated insights. By building a program on the existing foundation of culture, structure, and governance, IT leaders can navigate the complexities of AI while upholding ethical standards and fostering innovation. ... IT leaders need to maintain a balance of intellectual (IQ) and emotional (EQ) intelligence to manage an AI-infused workplace.


How to Build a Strong and Resilient IT Bench

Since talent is likely to be in short supply both in new technology areas and in older tech areas that must still be supported, CIOs should consider a two-pronged approach that develops bench strength for new technologies while also ensuring that older infrastructure technologies have talent waiting in the wings. ... Companies that partner with universities and community colleges in their local areas have found a natural synergy with these institutions, which want to ensure that what they teach is relevant to the workplace. This synergy consists of companies offering input for computer science and IT courses and also providing guest lecturers for classes. Those companies bring “real world” IT problems into student labs and offer internships for course credit that enable students to work in company IT departments with an IT staff mentor. ... It’s great to send people to seminars and certification programs, but unless they immediately apply what they learned to an IT project, they’ll soon forget it. Mindful of this, we immediately placed newly trained staff on actual IT projects so they could apply what they learned. Sometimes a more experienced staff member had to mentor them, but it was worth it. Confidence and competence built quickly.


The Growing Quantum Threat to Enterprise Data: What Next?

One of the most significant implications of quantum computing for cybersecurity is its potential to break widely used encryption algorithms. Many of the encryption systems that safeguard sensitive enterprise data today rely on the computational difficulty of certain mathematical problems, such as factoring large numbers or solving discrete logarithms. Classical computers would take an impractical amount of time to crack these encryption schemes, but quantum computers could theoretically solve these problems in a matter of seconds, rendering many of today's security protocols obsolete. ... Recognizing the urgent need to address the quantum threat, the National Institute of Standards and Technology launched a multi-phase effort to develop post-quantum cryptographic standards. After eight years of rigorous research and relentless effort, NIST released the first set of finalized post-quantum encryption standards on Aug. 13. These standards aim to provide a clear and practical framework for organizations seeking to transition to quantum-safe cryptography. The final selection included algorithms for both public-key encryption and digital signatures, two of the most critical components of modern cybersecurity systems.
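
To see why factoring difficulty matters, here is a deliberately tiny, purely illustrative Python sketch of an RSA-style keypair. With a toy modulus, classical trial division recovers the private key instantly; a real 2048-bit modulus resists that attack, yet Shor's algorithm on a sufficiently large quantum computer could factor it and achieve the same result. All values below are assumptions chosen for readability, not realistic parameters.

def trial_factor(n):
    # Classical brute-force factoring; hopeless for 2048-bit moduli.
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    raise ValueError("no factor found")

p, q = 61, 53                        # "secret" primes (toy-sized)
n, e = p * q, 17                     # public key: modulus and exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+ modular inverse)

message = 1234
ciphertext = pow(message, e, n)

# An attacker who can factor n can rebuild d and decrypt:
fp, fq = trial_factor(n)
d_recovered = pow(e, -1, (fp - 1) * (fq - 1))
assert pow(ciphertext, d_recovered, n) == message
print("modulus factored, plaintext recovered:", pow(ciphertext, d_recovered, n))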


Are we worse at cloud computing than 10 years ago?

Rapid advancements in cloud technologies combined with mounting pressures for digital transformation have led organizations to hastily adopt cloud solutions without establishing the necessary foundations for success. This is especially common if companies migrate to infrastructure as a service without adequate modernization, which can increase costs and technical debt. ... The growing pressure to adopt AI and generative AI technologies further complicates the situation and adds another layer of complexity. Organizations are caught between the need to move quickly and the requirement for careful, strategic implementation. ... Include thorough application assessment, dependency mapping, and detailed modeling of the total cost of ownership before migration begins. Success metrics must be clearly defined from the outset. ... When it comes to modernization, organizations must consider the appropriate refactoring and cloud-native development based on business value rather than novelty. The overarching goal is to approach cloud adoption as a strategic transformation. We must stop looking at this as a migration from one type of technology to another. Cloud computing and AI will work best when business objectives drive technology decisions rather than the other way around.
A well-structured Data Operating Model integrates data efforts within business units, ensuring alignment with actual business needs. I’ve seen how a "Hub and Spoke" model, which places central governance at the core while embedding data professionals in individual business units, can break down silos. This alignment ensures that data solutions are built to drive specific business outcomes rather than operating in isolation. ... Data leaders must ruthlessly prioritize initiatives that deliver tangible business outcomes. It’s easy to get caught up in hype cycles—whether it’s the latest AI model or a cutting-edge data governance framework—but real success lies in identifying the use cases that have a direct line of sight to revenue or cost savings. ... A common mistake I’ve seen in organizations is focusing too much on static reports or dashboards. The real value comes when data becomes actionable — when it’s integrated into decision-making processes and products. ... Being "data-driven" has become a dangerous buzzword. Overemphasizing data can lead to analysis paralysis. The true measure of success is not how much data you have or how many dashboards you create but the value you deliver to the business. 



Quote for the day:

"Efficiency is doing the thing right. Effectiveness is doing the right thing." -- Peter F. Drucker

Daily Tech Digest - November 29, 2024

The Human Side of Digital Transformation

Digital transformation often stirs feelings of fear and uncertainty, particularly among employees worried about job security or their capacity to embrace new technologies. Leaders should prioritize active listening and demonstrate genuine empathy towards their concerns. This goes beyond merely recognizing their fears; it involves grasping their viewpoints and discovering solutions to alleviate their concerns. ... When employees feel excluded from the digital transformation journey, the initiative is likely to falter. Effective leaders understand that cultivating open communication is crucial for achieving success. Regular meetings, town halls, and feedback loops can create an environment where employees feel valued and actively participate in the decision making process. When leaders nurture an open environment that invites employees to share concerns and ideas, it enhances engagement and alleviates fears of uncertainty. ... In situations like these, employees frequently venture beyond their comfort zones to embrace new practices and technologies. Acknowledging their contributions, whether significant or minor, can greatly enhance morale and engagement. Leaders with emotional intelligence express gratitude, whether through verbal recognition, presenting awards, or providing new growth opportunities.


Breaking the cloud backup ‘black box’ with intelligent data mapping and retrieval

In this massive, fast-moving ecosystem, creating a cloud backup becomes quite a task. First, one has to cover an endless, rapidly growing wave of cloud assets, from every active application and database to resources that have been shut down or moved. Then, after identifying the resources, one has to manually tag them with metadata labels (key-value pairs for easier organization and filtering) and create snapshots. These are point-in-time backups that can be configured with varying retention periods, allowing users to restore the assets at any given time within the retention window. Over the years, these snapshots have evolved, providing enterprises with capabilities like automation (after initial configuration) and encryption. ... At the core, Eon creates snapshots by automating resource mapping, classification and policy association. It scans cloud resources on an ongoing basis, automatically mapping and classifying them based on environment type—whether production, development, staging, or QA—and data sensitivity, such as personally identifiable, health or financial information. Once the resources are mapped, it applies customized backup retention policies in line with the enterprise’s specific business and compliance requirements. This ensures the data is not under- or over-backed up.
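
For a sense of what the manual workflow looks like in practice, here is a hedged sketch using AWS and boto3: tag a volume with key-value metadata labels, then create a point-in-time snapshot of it. The volume ID, tag values and retention figure are invented for illustration, and this is a generic AWS example rather than a description of how Eon itself works.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume_id = "vol-0123456789abcdef0"   # hypothetical volume

# Key-value metadata labels for easier organization and filtering.
ec2.create_tags(
    Resources=[volume_id],
    Tags=[
        {"Key": "environment", "Value": "production"},
        {"Key": "data-class", "Value": "pii"},
    ],
)

# Point-in-time backup; the retention tag would be enforced by a separate lifecycle policy.
ec2.create_snapshot(
    VolumeId=volume_id,
    Description="daily-baseline-backup",
    TagSpecifications=[{
        "ResourceType": "snapshot",
        "Tags": [{"Key": "retention-days", "Value": "30"}],
    }],
)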


It’s Time to Fight For Personal Data Rights

At a time when massive companies fall victim to devastating hacks with alarming frequency, the Change Healthcare hack stands out not just for its size but also for the ways in which it highlights all that's broken with how we protect people's personal information. ... It's imperative that federal legislators take the opportunity to tackle this urgent vulnerability in society. To satisfy the cybercriminals that extorted it, Change Healthcare paid a $22 million ransom, yet sensitive data was still leaked onto the dark web and is probably still available there. Meanwhile, my family was not notified that we were victims until seven months after the hack occurred. Unlike the other data hacks for which I had a clear customer or employment relationship with the hacked organization, I did not know what Change Healthcare was. To my knowledge, I had never dealt with Change Healthcare before. ... How did this company, which I had never heard of before, have two of our most sensitive types of personal data? And why were we informed over seven months after the fact, during which time criminals could have been selling our data on the dark web or actively using them to steal our identities?


Hard drives at the heart of AI innovation

Hard drives are indispensable in AI storage clusters due to their scalability, cost-efficiency, and sustainability. Hard drives store 90 percent of exabytes in the world’s largest data centers. Additionally, 90 percent of enterprise exabyte workloads include AI data retention, analytics, and backup, which require high capacity, yet relatively moderate data processing time. These workloads are best suited for hard drives, thanks to the drives’ cost-per-terabyte advantage. Recent advancements in hard drive areal density have enhanced both AI performance and energy efficiency. ... Proactive storage capacity planning and strategic sourcing for the long term is key for enterprises to unlock the full value of AI data. Winning businesses are the ones that can increase storage capacity dramatically while decreasing the total cost of ownership and resource impact. ... The future of AI hinges on storage. As AI drives innovation, data drives AI, and storage drives data. Advancing storage architecture will be critical in realizing AI’s promise, and empowering businesses to stay ahead in this transformative era. Emerging technologies such as DNA data storage and quantum storage hold the potential to revolutionize the way we store and access data, offering unprecedented capacity and speed.


Teaching The Board To Talk To CISOs

The Board is often a political arena, with various agendas at play. Without any knowledge of the other issues and priorities currently under discussion at Board level, and of the political struggles and personalities involved across the Boardroom table, no executive can be expected to articulate anything genuinely relevant to the Board. General cybersecurity knowledge and a sense of perspective around risks and threats can be brought in by external experts or non-executive directors, but the reality of the situation on the ground across the firm can only come from the CISO, and they can only put it into context for the Board if they are given a sense of context in the first place. This goes beyond a broad sense of alignment between cyber strategy and business strategy: It is about aligning cyber execution with business execution over time across the strategic lifecycle; a lifecycle that can be disrupted by mergers, acquisitions, the arrival of new executives at the top, new market opportunities, technological evolutions or global events. Board members and senior executives need to understand the essential nature of this alignment for the CISO – or any other executive – to provide them with input, answer their questions or address their concerns in a valuable and meaningful way.


Retrieval-Augmented Generation (RAG): Deep Dive into 25 Different Types of RAG

Corrective RAG functions as a real-time fact-checker designed to generate responses and validate them against reliable sources to minimize errors. Its architecture includes an error-detection module that identifies and corrects discrepancies in the generated response before delivery. ... Speculative RAG anticipates user needs by predicting queries and preparing relevant responses ahead of time. This forward-thinking approach analyzes user context and behavior to pre-fetch data, reducing response times and enhancing user experience. ... Agentic RAG offers adaptability by evolving with the user over time, learning preferences through repeated interactions. Unlike static systems, Agentic RAG dynamically refines its database and retrieval processes, creating a personalized experience. ... Self-RAG is an autonomous architecture focused on continuous improvement. It evaluates the accuracy and relevance of its responses, iteratively refining its retrieval methods. ... Adaptive RAG excels at adjusting its responses based on real-time changes in user context or environmental factors. This flexibility allows it to maintain relevance even in dynamic scenarios.
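
As a minimal sketch of the corrective pattern, the Python function below generates an answer, checks each sentence against the retrieved sources, and retries with feedback when something is unsupported. The generate(), retrieve() and is_supported() callables are hypothetical stand-ins for whatever LLM, vector store and fact-checking model a real implementation would use.

def corrective_rag(query, generate, retrieve, is_supported, max_attempts=3):
    """Generate an answer, validate it against retrieved sources, retry if needed."""
    sources = retrieve(query)
    feedback = ""
    answer = ""
    for _ in range(max_attempts):
        answer = generate(query, sources, feedback)
        # Crude sentence-level check; a real error-detection module is far richer.
        unsupported = [s for s in answer.split(". ") if s and not is_supported(s, sources)]
        if not unsupported:
            return answer                      # every claim is grounded in a source
        feedback = "Revise these unsupported claims: " + "; ".join(unsupported)
    return answer + "\n(Warning: some claims could not be verified.)"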


Resistance to digital transformation change in financial services is to be expected

Digital transformation projects can bring about major changes within organisations which may meet inherent cultural resistance. An inability to change mindsets can still pose significant barriers to transformation success. Whilst we are seeing the insurance industry start to embrace and accelerate change, some parts are steeped in traditions and legacy models. Some colleagues can view implementing digital tools as a threat to their job, rather than an enabler or a way to enhance their ways of working. To overcome this mindset, insurance leaders need to make a concerted effort to demonstrate the tangible benefits of digital adoption. For instance, increased efficiency gives agents more time for the customer, for innovative thinking and for sharing ideas to improve customer service. At this stage communication is key to effectively demonstrate the vision, emphasising the benefits colleagues will gain from digital tools. We’ve seen some of this within developer communities, for example, with the impact of GitHub Copilot on their jobs. It’s important to clearly explain the positive impact tools can have on a colleague’s role and how they can work alongside, not instead of, them to increase their efficiency and job satisfaction.


As Supply Chains Go Digital, Cybersecurity Must be Strongest Link

Cybercriminals increasingly view supply chains as lucrative targets, exploiting their interconnected nature to launch ransomware attacks, steal sensitive data or disrupt operations. A single weak link — be it a poorly secured vendor or an outdated system — can expose an entire network to vulnerabilities, and attacks often exploit third-party vendors who may lack robust security protocols. Citing the recent CrowdStrike cyber incident as an example, Regina Lewie, senior vice president and chief risk officer at Corporate One Federal Credit Union, told PYMNTS that the new threat landscape can be full of surprises for unprepared businesses. “Even without direct involvement, we had to react quickly to protect our members and maintain trust,” she said, noting that third-party risk management is a growing focus. ... “The barrier for entry has never been lower for threat actors,” Discover® Global Network Chief Information Security Officer Sunil Mallik told PYMNTS in July. And while regulations can help provide a baseline, proactive companies are going beyond that to build resilience into their system. “We’ve needed to rethink, from the ground up, how we architect security,” Mastercard Chief Product Officer Jorn Lambert told PYMNTS in October.


Open source spirit: elevating team collaboration and innovation

Open source foundations play a vital role in building communities, setting guidelines, and fostering collaboration. They sit in the middle as a “neutral body”, since open source thrives on collaboration rather than confrontation. This is where open source foundations come in—they help negotiate between the interests of maintainers and adopters and empower contribution. While some open source projects thrive without the backing of a formal foundation, a neutral body is often essential for nurturing a strong community and ensuring the long-term success of both the project and its technology stack. Without such an entity, it can be much more difficult to build and sustain a community, which in turn hampers the future development of the project. ... Playing a more active role in open source comes with some risks due to the chaotic nature of the space. Still, I believe that to be a leader in technology, it’s important to take these risks intentionally and benefit from shaping technology not just for your customers but also for everyone else. As explored, there are plenty of motivations for it. A question remains as to when more traditional companies, which rely on software now and in the future to stay competitive, will reach the digital maturity to contribute to open source the way digital natives do.


Flexible Chips: A Catalyst for Realizing the Internet of Everything (IoE)

Flexible integrated circuits based on thin-film technology don’t require the complex, high-temperature processes inherent in silicon chip fabrication. Instead, they use a simple spin-coating technique, where polyimide is applied to a glass carrier. This allows fabrication at much lower temperatures, which significantly reduces the use of energy, water, and harmful chemicals. In turn, this slashes carbon emissions while also cutting down on set-up costs and production timelines. Using this method, flexible chips can be produced in just four weeks. This speed opens new possibilities for innovation: designers no longer need to get it ‘right first time’, as the rapid cycle times enable on-the-fly design adjustments, and iterative improvements as requirements change. ... The ultra-low cost and reduced carbon footprint of flexible chips position them as ideal candidates to embed into everyday objects. These qualities make them perfectly suited for IoE applications, where they can generate vast amounts of data to fuel AI models, enabling greater efficiency and actionable insights at scale. In the fast-moving consumer goods (FMCG) sector, this connectivity could facilitate item-level product verification or one-tap consumer interactions, creating more personalized experiences. 



Quote for the day:

"Good content isn't about good storytelling. It's about telling a true story well." -- Ann Handley

Daily Tech Digest - November 28, 2024

Agentic AI: The Next Frontier for Enterprises

Agentic AI represents a significant leap forward. "These systems can perform complex reasoning, integrate with vast enterprise datasets and execute processes autonomously. For instance, a task like merging customer accounts, which traditionally required ticket creation and days of manual effort, can now be completed in seconds with agentic AI," said Arun Kumar Parameswaran ... Salesforce's Agentforce, unveiled at Dreamforce 2024, represents a significant milestone. Built on the company's Atlas reasoning engine and using models such as OpenAI's GPT-4 and Google's Gemini, Agentforce combines advanced AI with Salesforce's extensive ecosystem of customer engagement data. Agentforce marks the "third wave of AI," said Marc Benioff, CEO of Salesforce. He predicts a massive 1 billion AI agents by 2026. Unlike earlier waves, which focused on predictive analytics and conversational bots, this phase emphasizes intelligent agents capable of autonomous decision-making. Salesforce has amassed years of customer engagement data, workflows and metadata, making Agentforce a precision tool that understands and anticipates customer needs.


Get started with bootable containers and image mode for RHEL

Bootable containers, also provided as image mode for Red Hat Enterprise Linux, represent an innovation in merging containerization technology with full operating system deployment. At their core, bootable containers are OCI (Open Container Initiative) container images that contain a complete Linux system, including the kernel and hardware support. This approach has several characteristics, namely: Immutability: The entire system is treated as an immutable unit, reducing configuration drift and enhancing security (other than /etc and /var, all directories are mounted read-only once deployed on a physical or virtual machine). Atomic updates: System updates can be performed as atomic operations, simplifying rollbacks and ensuring system consistency. Standardized tooling: Leverages existing OCI container tools and workflows, reducing the learning curve for teams familiar with containerization, and makes it possible to design a complete OS environment using a Containerfile as a blueprint. This is a wonderful benefit for a variety of use cases, including edge computing and IoT devices (where consistent, easily updatable system images are crucial), as well as on general cloud-native infrastructure to enable infrastructure-as-code practices at the OS level.


Traditional EDR won't cut it: why you need zero trust endpoint security

The development of EDR tools was the next step in cyber resiliency after antivirus began falling behind in its ability to stop malware. The struggle began when the rate at which new malware was created and distributed far outweighed the rate at which it could be logged and prevented from causing harm. The most logical step to take was to develop a cybersecurity tool that could identify malware by actions taken, not just by code. ... cybercriminals are now using AI to streamline their malware generation process, creating malware at faster speeds and improving its ability to run without detection. Another crucial problem with traditional EDRs and other detection-based tools is that they do not act until the malware is already running in the environment, which leads them to fail customers and miss cyberattacks until it is already too late. ... With application allowlisting, you create a list of the applications and software you trust and need and block everything else from running. Allowlisting is a zero trust method of application control that prevents known and unknown threats from running on your devices, preventing cyberattacks, like ransomware, from detonating.
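
A toy illustration of the default-deny idea behind allowlisting is sketched below in Python: compute a hash of a binary and allow execution only if the hash appears on an approved list. The path and digest are placeholders, and real products enforce this in the operating system kernel rather than in a script.

import hashlib

# Digests of approved binaries; the value below is a placeholder, not a real hash.
ALLOWLIST = {
    "3f2a9c0d4e5b6a7f8091a2b3c4d5e6f708192a3b4c5d6e7f8091a2b3c4d5e6f7",
}

def sha256_of(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def may_execute(path):
    # Default deny: anything unknown or unreadable is blocked.
    try:
        return sha256_of(path) in ALLOWLIST
    except OSError:
        return False

print("allowed" if may_execute("/usr/bin/example") else "blocked")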


AI and the future of finance: How AI is empowering NBFCs

One of the first applications of AI in non-banking financial companies (NBFCs) is the evaluation of credit risk. Until now, lenders relied mainly on credit scoring models and legacy data on a client. However, such models often do not grasp the complexity of a person’s or a business’s financial profile, a common problem in countries with large informal economies. AI, on the other hand, can analyse large amounts of data, from historical transaction information to phone use and even social behaviour. AI algorithms are able to analyse this data at astonishing speed, recognising trends and yielding more precise forecasts about the borrower’s capability to pay back loans. This enables NBFCs to offer credit to a wider and more diverse client base, which ultimately drives financial inclusion. ... The function of AI extends beyond just providing transactional support. With the help of sophisticated machine-learning models, NBFCs are able to offer personalised financial products tailored to individuals’ financial behaviour, preferences, lifestyles, and circumstances. ... By using advanced analytics and machine-learning models, NBFCs are able to identify new opportunities to grow.
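
The sketch below shows, in very reduced form, what such a model might look like: a logistic regression over a handful of assumed alternative-data features. The feature names, data points and labels are invented for illustration, and a production NBFC model would use far more data and be audited for bias and explainability.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: monthly_txn_count, avg_txn_value, phone_topups_per_month (all invented).
X = np.array([
    [40, 120.0, 4],
    [5,   30.0, 1],
    [60, 200.0, 6],
    [8,   45.0, 2],
])
y = np.array([1, 0, 1, 0])   # 1 = repaid a previous loan, 0 = defaulted

model = LogisticRegression().fit(X, y)
applicant = np.array([[25, 90.0, 3]])
print("estimated repayment probability:", model.predict_proba(applicant)[0, 1])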


Achieving Success in the New Era of AI-Driven Data Management

AI-driven personalization is essential for companies looking to stand out in a competitive marketplace. By leveraging vast amounts of customer data, AI helps businesses create highly tailored experiences that adapt to individual user preferences, increasing engagement and loyalty. Recent research shows "that 81 percent of customers prefer companies that offer a personalized experience." ... AI-driven data analytics has significant ethical, privacy, and regulatory challenges. Ethical considerations, such as bias detection and mitigation, are necessary to ensure AI models provide fair and accurate outcomes. Implementing governance frameworks and transparency in AI decision-making builds trust by making algorithms' logic accessible and accountable, minimizing the risk of unintended discrimination in data-driven insights. Data privacy and security are equally critical. The increased use of techniques like differential privacy raises expectations of high privacy standards. Differential privacy adds carefully calibrated "noise" to data sets — random variations designed to prevent the re-identification of individuals while still allowing accurate aggregate insights. 
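
To make the noise-adding idea concrete, here is a minimal Laplace mechanism in Python: a count query has sensitivity 1, so adding noise drawn from Laplace(scale = sensitivity / epsilon) yields an epsilon-differentially-private answer. The epsilon value and the data are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def dp_count(values, predicate, epsilon=0.5, sensitivity=1.0):
    # Adding or removing one person changes a count by at most 1 (the sensitivity),
    # so Laplace noise with scale sensitivity/epsilon gives epsilon-DP.
    true_count = sum(1 for v in values if predicate(v))
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

ages = [23, 37, 41, 52, 29, 61, 45]
print("noisy count of customers over 40:", round(dp_count(ages, lambda a: a > 40), 2))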


Riding the wave of digital transformation: Insights and lessons from Japan’s journey

Availability and accessibility of digital infrastructure is often inadequate in developing countries, preventing digital services from reaching everyone. Japan’s experience in this domain ranges from formulating national strategies for digital infrastructure development to providing affordable high-speed internet access, and to integrating and standardizing different systems. The key takeaway here is the importance of sustaining robust infrastructure investment over a period of time and providing room for digital system scalability and flexibility. ... With this in mind, Japan embraced innovative approaches to enhance people’s digital skills. Some cities like Kitakyushu are training staff to use minimal coding tools—software that allows them to design applications with simple code—as well as providing other training on digital transformation to equip staff at various levels within local governments with relevant skills. ... Digital transformation relies on coordinated efforts: the Japanese central government established supportive policies and frameworks, while local governments translated these into actionable initiatives for public benefit.


When Hackers Meet Tractors: Surprising Roles in IoT Security

IoT encompasses the billions of connected devices we use daily - everything from smart home gadgets to fitness trackers. IIoT focuses on industrial applications, such as manufacturing robots, energy grid systems and autonomous vehicles. While these technologies bring remarkable efficiencies, they also expand the potential attack surface for cybercriminals. Ransomware, data breaches, and system takeovers are no longer just concerns for tech companies - they’re threats to every industry that relies on connectivity. ... Breaking into IoT and IIoT cybersecurity may seem daunting, but the pathway is more accessible than you might think. Leverage transferable skills. Many professionals transition into IoT/IIoT roles by building on their existing cybersecurity expertise. For instance, knowledge of network security or ethical hacking can be adapted to these environments. It is also beneficial to pursue specialized certifications that can demonstrate your expertise and open doors in niche fields. ... GICSP is designed specifically for professionals working in industrial environments, such as manufacturing, energy, or transportation. It bridges the gap between IT, OT (Operational Technology), and IIoT, emphasizing the secure operation of industrial control systems.


How to Ensure Business Continuity for Banks and Financial Services

A business continuity plan is only as effective as the people behind it. Creating a culture of safety and preparedness throughout a financial services organization is key to a successful crisis response. Regular training sessions, disaster simulations, and frequent updates to the BCP keep teams ready and capable of responding efficiently. Facilities teams must have a clear understanding of their roles and responsibilities during a disruption. From decision-makers to on-the-ground personnel, each team member should know exactly what steps to take to restore operations. Clear protocols ensure that recovery efforts can be executed quickly, minimizing service interruptions and maintaining a seamless customer experience. Disasters may be inevitable, but with the right facilities management strategies in place, financial service companies can be well-prepared to respond effectively and ensure business continuity. From conducting risk assessments to leveraging technology and building strong vendor partnerships, proactive facilities management can be the difference between a rapid recovery and prolonged downtime. Now is the time to assess the current state of facilities, ensure teams are trained, and confirm that business continuity plans are robust. 


Enterprises Ill-prepared to Realize AI’s Potential

To build more AI infrastructure readiness, skilled talent will be key to overcoming a deficit in workers needed to maintain IT infrastructure, Patterson suggests. In fact, only 31% of companies believed their talent was in a “high state of readiness” to fully make use of AI. In addition, 24% of those surveyed did not believe their companies held enough talent to address the “growing demand for AI,” the Cisco report revealed. Expanding the AI talent pool will require forming a learning culture for innovation, he says. That includes talent development and forming clear career paths. Leadership feels the pressure to achieve AI readiness, but workers are hesitant to use AI, according to the Cisco AI readiness report. “While organizations face pressure from leadership to bring in AI, the disconnect is likely due to hesitancy among workers within the organization who must take steps to gain new skills for AI or fear AI taking over their jobs,” Patterson says. ... “If you can’t secure AI, you won’t be able to successfully deploy AI,” he says. Meanwhile, tech professionals should develop a holistic view of the infrastructure required to adopt AI while incorporating observability and security, according to Patterson. A holistic view of infrastructure will bring “easier operations, resiliency, and efficiency at scale,” Patterson says.


The Role of Edge-to-Cloud Infrastructure in Shaping Digital Transformation

Unlike the traditional model of transporting data to the cloud for processing, edge infrastructure brings the distributed computing network closer to users: it is powered by small, local computing resources near the end user and relies on the cloud only as a ‘director’ of operations. This edge-to-cloud computing model allows IoT devices to stay small and affordable. It also allows localized computing power to expedite data processing across many applications without relying on high-throughput, consistent connectivity to a cloud hyperscale or other data center hundreds or thousands of miles away. ... The key to edge computing is handling the sizeable amounts of data that IoT devices can produce in conjunction with existing in-building systems that would be difficult, risky, or cost-prohibitive to supplant. Given that IoT devices and existing systems often provide raw, isolated data, IoT platforms consolidate, aggregate, and then analyze that data in real time, or farm it out to external tools in the cloud for specific needs (work order management, MOPs, etc.). The point is not just real-time context: because IoT platforms also provide a database of historical information, truly actionable outcomes can be driven from the data.



Quote for the day:

"Before you are a leader, success is all about growing yourself. When you become a leader, success is all about growing others" -- Jack Welch

Daily Tech Digest - November 27, 2024

Cybersecurity’s oversimplification problem: Seeing AI as a replacement for human agency

One clear solution to the problem of technology oversimplification is to tailor AI training and educational initiatives towards diverse endpoints. Research clearly demonstrates that know-how of the underlying functions of security professions has a real mediating effect on the excesses of encountering disruptive, unfamiliar conditions. The mediation of this effect by the oversimplification mentality, unfortunately, suggests that more is required. Specifically, discussion of the foundational functionality of AI systems needs to be married to as many diverse outcomes as possible to emphasize the dynamism of the technology. ... Naturally, one of the value propositions of studies like the one presented here is the ability for professionals to see the world as another kind of professional might. Whilst tabletop exercises are already a core tool of the cybersecurity profession, there are opportunities to incorporate comparative applications’ learning for AI using simple simulations. ... Finally, wherever possible, role rotation is of clear advantage to overcoming the issues illustrated herein. In testing, the diversity of career roles over and above career length played a similar role in mitigating the excesses of the impact of novel conditions on response priorities.


How to Create an Accurate IT Project Timeline

Building resilient project plans that can handle unforeseen, yet often inevitable changes, is key to ensuring timeline accuracy. "Understanding dependencies, identifying bottlenecks, and planning delivery around these constraints have shown to be important for timeline accuracy," Chandrasekar says. Project accuracy also depends on clear communication and tracking. "It's critical to consistently review timelines with your project team and stakeholders, making updates as new information is discovered," Naqib says. He adds that project timelines should be tracked with the support of a work management tool, such as SmartSheet or Jira, in order to measure progress and identify gaps. Yet even with perfect planning, unanticipated delays or changes may occur. Proper planning and communication are key to assuring timeline accuracy, says Anne Gee, director of delivery excellence for IT managed services at data and technology consulting firm Resultant. ... The best way to get a lagging timeline back on schedule is to work with your project team to identify the root cause, Naqib advises. "Then, you can work with your team and your greater organization to explore possible resolution accelerators that will keep your timeline on track."


Shaping the Future of AI Benchmarking – Trends & Challenges

AI benchmarking serves as a foundational tool for evaluating and advancing artificial intelligence systems. Its primary objectives address critical aspects of AI development, ensuring that models are efficient, effective, and aligned with real-world needs. ... Benchmarks provide valuable insights into a model’s limitations, serving as a roadmap for enhancement. For instance: Identifying Bottlenecks: If a model struggles with inference speed or accuracy on specific data types, benchmarks highlight these areas for targeted optimization. Algorithm Development: Benchmarks inspire innovation by exposing gaps in performance, encouraging the development of new algorithms or architectural designs. Data Quality Assessment: Poor performance on benchmarks may indicate issues with training data, prompting better preprocessing, augmentation, or dataset refinement techniques. ... AI benchmarking involves a systematic process to evaluate the performance of AI models using rigorous methodologies. These methodologies ensure that assessments are fair, consistent, and meaningful, enabling stakeholders to make informed decisions about model performance and applicability.
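
As a minimal illustration of what a benchmark harness does, the Python sketch below runs a stand-in model over a fixed task set and reports accuracy and mean latency, flagging which dimension looks like the bottleneck. The tasks, the toy model and the thresholds are all assumptions for illustration; real benchmarks rely on curated datasets and far more careful measurement.

import time

tasks = [("2+2", "4"), ("capital of France", "Paris"), ("3*7", "21")]

def toy_model(prompt):
    time.sleep(0.01)                      # simulate inference cost
    return {"2+2": "4", "3*7": "21"}.get(prompt, "unknown")

start = time.perf_counter()
correct = sum(toy_model(q) == answer for q, answer in tasks)
elapsed = time.perf_counter() - start

accuracy = correct / len(tasks)
latency_ms = 1000 * elapsed / len(tasks)
print(f"accuracy={accuracy:.0%}, mean latency={latency_ms:.1f} ms")
print("likely bottleneck:", "accuracy" if accuracy < 0.9 else "latency")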


Why data is the hottest commodity in cybersecurity

“The value of data has skyrocketed in recent years, transforming it into one of the most sought-after commodities in the digital age. The rise of AI and machine learning has only amplified the threat to data, as attackers can now automate their efforts and create more sophisticated and targeted campaigns.” Saceanu noted that Irish organisations, like those globally, are struggling to secure their systems and private information, with industries that typically hold sensitive data, such as those in healthcare, finance and education, being particularly vulnerable. “We have seen a massive focus on targeting organisations that operate in critical infrastructure for various motivations – financially oriented or to disrupt operations. This means that there are more and more ransomware attacks on manufacturing, energy and healthcare that are not only encrypting data, but also exfiltrating this data to ask for enormous ransom payments because they know that these organisations cannot afford any disruption.” For Saceanu, this shift to an environment driven by data and under near constant threat has led organisations to experiment with advanced technologies such as AI in order to improve efficiency and spearhead innovation


Proper ID Verification Requires Ethical Technology

When it comes to identity security, security teams should regularly monitor, identify, analyze, and report risks in their environment. If exploited, these risks can be detrimental to an organization, its assets, and stakeholders. They can also undercut ethical standards of privacy and data protection. Running risk assessments is especially important when there is a lack of visibility in company processes and security gaps. Organizations can systematically assess their security measures surrounding user identity data and ensure compliance with privacy policies and regulatory standards. ... Transparency is among the most vital aspects of ethical identity verification. It requires organizations to be upfront about how they practice data collection and management, and how the data is used. This has to be reflected in the company policies, culture, and of course, its technology, including data storage and access. Users, i.e., customers from whom data is collected, should be able to access the policy terms easily at any point. ... When companies are looking to procure ethical technology, it’s important to account for factors like privacy, accessibility, security, and regulations. The above factors look at the perspective of the company using the tech and how they should operate it. 


Accelerating Business Growth Using AIOps and DevOps

The rapid evolution of AI brings several new opportunities and challenges. Today, AI drives the business growth of an enterprise in more ways than one. Artificial intelligence for IT operations, or AIOps, is a new concept that encompasses big data, data mining, machine learning (ML) and AI. AIOps is a practice that blends AI with IT operations to improve operational processes. AIOps platforms automate and optimize IT operations, providing users with real-time visibility and predictive alerts that minimize operational issues and resolve them proactively. ... Adopting AIOps helps DevOps through automation, predictive intelligence and better data-driven decisions. This collaboration fosters efficient processes, improved quality and continuous improvement to meet the ever-changing demands of the industry and customer requirements. ... AI makes it easier for DevOps teams to find patterns in data, make sense of that data and form informed decisions on which resources and processes to allocate. The convergence of AIOps and DevOps processes can yield valuable insights that help improve decision-making.
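
The flavor of an AIOps-style predictive alert can be shown with a few lines of Python: flag a latency sample that deviates sharply from the recent baseline. The metric values and the three-sigma threshold are assumptions; real platforms correlate many signals with far richer models.

import statistics

recent_latency_ms = [102, 98, 110, 105, 99, 101, 97, 104]   # assumed baseline window
new_sample = 240

mean = statistics.mean(recent_latency_ms)
stdev = statistics.stdev(recent_latency_ms)
z_score = (new_sample - mean) / stdev

if z_score > 3:
    print(f"ALERT: latency {new_sample} ms is {z_score:.1f} standard deviations above baseline")
else:
    print("latency within normal range")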


When is data too clean to be useful for enterprise AI?

Not cleaning your data enough causes obvious problems, but context is key. Google suggests pizza recipes with glue because that’s how food photographers make images of melted mozzarella look enticing, and that should probably be sanitized out of a generic LLM. But that’s exactly the kind of data you want to include when training an AI to give photography tips. Conversely, some of the other inappropriate advice found in Google searches might have been avoided if the origin of content from obviously satirical sites had been retained in the training set. “Data quality is extremely important, but it leads to very sequential thinking that can lead you astray,” Carlsson says. “It can end up, at best, wasting a lot of time and effort. At worst, it can go in and remove signal from your data, and actually be at cross purposes with what you need.” ... AI needs data cleaning that’s more agile, collaborative, iterative and customized for how data is being used, adds Carlsson. “The great thing is we’re using data in lots of different ways we didn’t before,” he says. “But the challenge is now you need to think about cleanliness in every one of those different ways in which you use the data.” Sometimes that’ll mean doing more work on cleaning, and sometimes it’ll mean doing less.


Architectural Intelligence – The Next AI

The vast majority of software has deterministic outcomes. If this, then that. This allows us to write unit tests and have functional requirements. If the software does something unexpected, we file a bug and rewrite the software until it does what we expect. However, we should consider AI to be non-deterministic. That doesn’t mean random, but there is an amount of unpredictability built in, and that’s by design. The feature, not a bug, is that the LLM will predict the most likely next word. "Most likely" does not mean "always guaranteed". For those of us who are used to dealing with software being predictable, this can seem like a significant drawback. However, there are two things to consider. First, GenAI, while not 100% accurate, is usually good enough. ... When considering AI components in your system design, consider where you are okay with "good enough" answers. I realize we’ve spent decades building software that does what it’s expected to do, so this may be a complex idea to think about. As a thought exercise, replace a proposed AI component with a human. How would you design your system to handle incorrect human input? Anything from UI validation to requiring a second person’s review. What if the User in User Interface is an AI? 
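
Following that thought exercise, one way to handle a non-deterministic component is to validate its output the way you would validate human input, and escalate when validation keeps failing. The sketch below assumes a JSON contract with two required fields; generate() and human_review() are hypothetical stand-ins for the model call and the second-person review.

import json

def accept_ai_output(raw_text, generate, human_review, retries=2):
    """Validate model output like user input; fall back to human review if needed."""
    for _ in range(retries):
        try:
            data = json.loads(raw_text)                        # structural validation
            if isinstance(data, dict) and {"customer_id", "amount"} <= data.keys():
                return data                                    # required-field check passed
        except json.JSONDecodeError:
            pass
        raw_text = generate("Return valid JSON with customer_id and amount.")
    return human_review(raw_text)                              # second-person review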


The Impact of Advanced Data Lineage on Governance

Advanced data lineage (ADL) provides a powerful set of tools for understanding data’s history. It is proactive and preventative, addressing data issues as they occur or before they happen. Advanced data lineage represents a significant evolution: historically, traditional data lineage tracked data movement and transformations linearly. Consequently, organizations often receive static reports that quickly become outdated in fast-changing data environments. ... As ADL transforms how organizations understand and manage their data, it requires a corresponding evolution in data governance practices. This transformation requires more than selecting the right software; it calls for an adaptive framework that supports efficient assessments and actions on lineage information. An adaptive Data Governance framework is flexible enough to respond quickly to new insights provided by ADL, while still maintaining a structured approach to data management. With this shift comes increased and frequent interactions between adaptive DG teams and other departments to resolve issues. To do this well, a framework should clearly define roles, responsibilities, and escalation paths when addressing issues identified by ADL. This approach is agile while maintaining a solid methodological foundation.


Navigating AI Regulations: Key Insights and Impacts for Businesses

The historical risks associated with AI highlight the need for careful consideration and proactive management as these technologies continue to evolve. Addressing these challenges requires collaboration among technologists, policymakers, ethicists, and society at large to ensure that the development and deployment of AI provides positive contributions to society while also minimizing potential harms. AI systems raise significant data privacy concerns because they collect and process vast amounts of personal data. Regulatory frameworks establish guidelines for data protection. These ensure individuals’ information is handled confidentially, responsibly, and with their full consent. AI systems must be understandable, fair, incorporate human judgment, and be ethical. Trustworthy AI systems should perform reliably across various conditions and be resilient to errors or attacks. Developers must comply with privacy laws and safeguard personal data used in training AI models. This includes obtaining user consent for data usage and implementing strong security measures to protect sensitive information.
 


Quote for the day:

"Small daily imporevement over time lead to stunning results." -- Robin Sherman

Daily Tech Digest - November 26, 2024

Just what the heck does an ‘AI PC’ do?

As the PC market moves to AI PCs, x86 processor dominance will lessen over time, especially in the consumer AI laptop market, as Arm-based AI devices grab more share from Windows x86 AI and non-AI laptops, according to Atwal. “However, in 2025, Windows x86-based AI laptops will lead the business segment,” Atwal said. ... “We see AI-enabled PCs evolving to provide more personalized, adaptive experiences that are tailored to each user’s needs,” Butler said. “The rise of generative AI was a pivotal moment, yet reliance on cloud processing raises concerns around data privacy.” Each component of a PC plays a unique role in making AI tasks efficient, but the NPU is key for accelerating AI computations with minimal power consumption, according to Butler. In general, he said, AI PCs assist in or handle routine tasks to be more efficient and intuitive for users without the need to access an external website or service. ... AI PCs can also boost productivity by handling routine tasks such as scheduling and organizing emails, and by enhancing collaboration with real-time translation and transcription features, according to Butler. 


Humanity Protocol: ‘We’re building a full credential ecosystem’

Distinguishing between humans and machines online has become more important than ever. Over the past years, the digital world has seen a proliferation of AI-fueled deepfake impersonations, bots and Sybil attacks, in which a single entity creates many false identities to gain influence. An increasing number of companies are trying to come up with solutions relying on blockchain technology. One of the more well-known projects is World Network, previously known as Worldcoin, which scans irises to confirm their users are human. But the space is seeing more and more competitors relying on biometrics to prove people are real – including Humanity Protocol. “There are definitely a bunch of companies that are trying to solve the whole Proof of Personhood problem,” the company’s founder Terence Kwok told Biometric Update in an interview earlier this month. “We’re lucky to be one of the few that have started launching, building a user base and joined the market.” The company launched a testnet in October, allowing users and developers to get their first taste of the platform and receive some free cryptocurrency. The project has so far signed up over a million people – moving quickly to catch up with World Network which currently has 15 million users, including 7 million verified through its Orb iris-scanning technology.


The way we measure progress in AI is terrible

Benchmark creators often don’t make the questions and answers in their data set publicly available either. If they did, companies could just train their model on the benchmark; it would be like letting a student see the questions and answers on a test before taking it. But that makes them hard to evaluate. Another issue is that benchmarks are frequently “saturated,” which means all the problems have pretty much been solved. For example, let’s say there’s a test with simple math problems on it. The first generation of an AI model gets a 20% on the test, failing. The second generation of the model gets 90% and the third generation gets 93%. An outsider may look at these results and determine that AI progress has slowed down, but another interpretation could just be that the benchmark got solved and is no longer that great a measure of progress. It fails to capture the difference in ability between the second and third generations of a model. One of the goals of the research was to define a list of criteria that make a good benchmark. “It’s definitely an important problem to discuss the quality of the benchmarks, what we want from them, what we need from them,” says Ivanova. “The issue is that there isn’t one good standard to define benchmarks. This paper is an attempt to provide a set of evaluation criteria. That’s very useful.”


Governance Considerations and Pitfalls When Implementing GenAI

Many large organizations are still in the process of establishing robust information governance frameworks for their current environments. Now, they must also address questions about their readiness to manage the impact of Copilot and similar generative AI tools. These questions include whether they can uphold appropriate access, use, and management across their IT infrastructure. Additionally, organizations should assess whether new artifacts are being created that could introduce unforeseen regulatory risk. ... With Copilot, anything a user has permission to access may surface as part of a response to a query or prompt. Without Copilot, over-permissioned users who have access to documents they should not would typically only uncover those documents by actively searching for them. Therefore, excess permissions and failure to limit access to certain materials can potentially expose information to far more employees than intended. To manage this, organizations must be diligent in defining controls and thoroughly understand the range of materials that Copilot users can access at different permission levels. Notably, when Copilot is turned on for a user, every application within Microsoft 365 that has a Copilot element will have AI activated. 
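
As a purely hypothetical illustration of that over-permissioning risk, a pre-rollout audit might enumerate every sensitive document each user is technically permitted to read, since all of it becomes surfaceable through an assistant. The sketch below uses an invented data model, not the Microsoft Graph API or any real Copilot tooling.

# Hypothetical permission records and labels; not a real Microsoft 365 API.
from collections import defaultdict

# (user, document, sensitivity label)
permissions = [
    ("alice", "q3-board-deck.pptx", "confidential"),
    ("alice", "team-handbook.docx", "general"),
    ("bob",   "q3-board-deck.pptx", "confidential"),  # likely over-permissioned
    ("bob",   "expense-policy.pdf", "general"),
]

# Collect, per user, the confidential documents an assistant could surface
# in an answer, whether or not the user ever goes looking for them.
exposure = defaultdict(list)
for user, doc, label in permissions:
    if label == "confidential":
        exposure[user].append(doc)

for user, docs in exposure.items():
    print(f"{user}: {len(docs)} confidential document(s) reachable -> {docs}")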


Next-Gen Networking: Exploring the Utility of Smart Routers in Data Centers

In cases where smart routers offer automated network management capabilities, they usually do so based on software that provides features like the ability to reroute packets to help balance network load or discover new devices automatically when they join the network. In this sense, smart routers don’t really do anything all that new; the sorts of capabilities just mentioned have long been a standard part of network management software. The only differentiator for smart routers, perhaps, is that these devices come bundled with software that enables them to help manage networks automatically, instead of requiring additional network management tools for that purpose. In addition, there seems to be a focus in smart router land on the notion of hands-off network management. Instead of requiring admins to configure networking policies and apply them manually, smart routers promise in many cases to manage your networks for you. It's essentially an example of what you might categorize as NoOps. It’s worth noting, too, that in more than a few cases, smart router vendors are slapping the “AI” label on their devices. But like many vendors who profess to be selling AI-powered solutions today, they're using the term loosely to refer to any type of software that uses data analytics in some sort of way.
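
For illustration only, the kind of “automated” behavior described above often reduces to a simple policy such as steering new traffic onto the least-loaded path. The toy sketch below (all link names and utilization figures are invented) shows that logic in a few lines; real routers implement it through routing protocols and vendor software rather than application code.

# Invented example links and load figures; a toy least-loaded-path policy.
paths = {
    "uplink-a": {"utilization": 0.82},  # 82% of capacity in use
    "uplink-b": {"utilization": 0.35},
    "uplink-c": {"utilization": 0.58},
}

def choose_path(paths: dict) -> str:
    """Return the link with the most spare capacity for a new flow."""
    return min(paths, key=lambda name: paths[name]["utilization"])

print(choose_path(paths))  # -> uplink-b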


Digitising India with AI-based photogrammetry software

Photogrammetry is the capture of measurements from photographs shot by drones, satellites, or aerial photography, and the generation of maps and 3D models, up to and including a Geographic Information System (GIS). Traditionally, photogrammetric processing involved collecting a huge amount of data through manual effort, with post-processing handled by experts over a considerable period. The introduction of AI and machine learning into photogrammetry has streamlined these processes, making them faster and more automation-friendly. With AI photogrammetry software, one can now process thousands of aerial images automatically to produce accurate topographic maps and real-time 3D models. ... Errors in land surveys can be very expensive and lead to many complications, especially in construction, farming, and city management. Using AI-based photogrammetry increases accuracy in measurement and reduces human errors in the process. AI algorithms improve the quality of the resultant maps and models by identifying and rectifying anomalies in the data automatically. The system can also blend images from different sources, such as aerial pictures, LiDAR data, and satellite images, to provide a better and more accurate picture of the land.
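
As a rough illustration of one step such software automates, the sketch below matches tie points between two overlapping aerial images using classical ORB features in OpenCV. The file names are placeholders, and AI-based tools would typically replace or augment this step with learned features and automated anomaly correction.

# Minimal feature-matching step of a photogrammetry pipeline (OpenCV ORB).
# Image file names are placeholders for two overlapping aerial photos.
import cv2

img1 = cv2.imread("aerial_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("aerial_002.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)          # detect up to 2000 keypoints
kp1, des1 = orb.detectAndCompute(img1, None)  # keypoints + binary descriptors
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching with cross-checking to discard asymmetric matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} tie points between the two images")
# Downstream steps (not shown): estimate relative camera pose from the matches,
# triangulate 3D points, and run bundle adjustment to build the terrain model.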


Will AI Kill Google? Past Predictions of Doom Were Totally Wrong

Sam Altman, the top executive overseeing ChatGPT, has said that AI has a good shot at shoving aside Google search. Bill Gates predicted that emerging AI will do tasks like researching your ideal running shoes and automatically placing an order so you'll "never go to a search site again." ... AI definitely could draw us away from Google in ways that smartphones and social media didn't. When you're planning a garden, an AI helper might guide you through where you want the flowers and fruit trees and hire help for you. No Googling necessary. "People are increasingly turning to ChatGPT to find information from the web, including the latest news," Altman's company, OpenAI, said. Maybe it's right to extrapolate from how people are starting to use AI today. Or maybe that's the mistake that Jobs made when he said no one was searching on iPhones. It wasn't wrong in 2010, but it was within a few years. Or what if AI upends how billions of us find information and we still keep on Googling? "The notion that we can predict how these new technologies are going to evolve is silly," said David B. Yoffie, a Harvard Business School professor who has spent decades studying the technology industry. 


Practical strategies to build an inclusive culture in cybersecurity

Despite meaningful progress, the cybersecurity and IT industries continue to face significant challenges in creating truly inclusive environments. Unconscious bias remains a pervasive issue, often influencing hiring, evaluation, and promotion processes, which can disadvantage women and other underrepresented groups. Retention is another ongoing challenge, as many organizations struggle to cultivate workplace cultures that are welcoming and supportive enough to retain diverse talent long-term. Barriers to entry and advancement persist, highlighting the need for continuous improvement and active intervention. While the industry has made strides in recognizing the importance of diversity, achieving full representation and inclusivity requires sustained commitment and effort. The current focus on diversity is encouraging, but only through consistent attention and action will the industry overcome these longstanding challenges and ensure a more equitable future. ... Work-life balance is another significant issue, particularly in cultures where traditional gender roles are still prevalent. Women often face greater expectations regarding balancing work and family, which can impact their career trajectory, especially in environments that lack flexible work arrangements. 


5 ways to achieve AI transformation that works for your business

"Never work in a silo and prepare to be wrong in terms of how you've set the technology up." Kollnig and her colleagues have implemented the Freshworks Customer Service Suite, an omnichannel support software with AI-powered chatbots and ticketing. She told ZDNET that working closely with the technology partner has helped her team to deliver a successful AI transformation. "So, for one of our AI projects, we established our basic set-up and said, 'Freshworks, come in and audit it. Tell us, are we doing this right? Would you do it differently?'" she said. ... Moyes said professionals in all sectors should take some sensible steps, including working with people who know more about AI. "Within every organization, there are groups of technology leads who are interested and want to innovate, evolve, and push," he said. "Lean on them. Learn from those at the coal face who want to do AI. There are no guarantees that the technologies you introduce will be the next best thing, but at least you'll be aware of the potential." Moyes said SimpsonHaugh is looking at how AI can reduce time-intensive tasks, such as summarizing text, and help staff find images to create early-stage design proposals.


What Does Enterprise-Wide Cybersecurity Culture Look Like?

Whoever is championing enterprise-wide security needs to secure buy-in from everyone within an organization. At the top, that means getting the C-suite and board to throw their weight behind security. “At the end of the day, if you don't have the CEO on board and the CEO isn't … voicing the same level of prioritization, then it will be something that's viewed as a half step back from … fundamental business priorities,” Cannava warns. Effective communication is a big part of getting that buy-in from leadership. How can security leaders explain to their boards and fellow executives that security is an essential business enabler? “Really [convert] the technology language or cyber language or jargon into how will … that risk potential impact revenue or reputation or our compliance?” says Landen. Tabletop exercises can be a powerful way to not just tell but show executives the value of cybersecurity. Walking through various cybersecurity incident scenarios can demonstrate the vital connection security has to operations and business outcomes. Ping Identity periodically engages multiple members of the C-suite in these exercises. “Not only do you learn what the gap is, you also learn by doing … you're pulled in and engaged as a member of the C-suite, and now you're invested,” he says.


Quote for the day:

"Great leaders do not desire to lead but to serve." -- Myles Munroe