Daily Tech Digest - December 17, 2024

Together For Good: How Humans And AI Can Close The Health Gap

While the potential is immense, AI’s effectiveness in closing the health gap hinges on more than just technological advancement. AI must be deliberately tailored, trained, tested, and targeted to bring out the best in and for people and the planet. This means anchoring AI development and deployment in a holistic understanding of humans and the environment they evolve in. It also entails the design of ethical frameworks, transdisciplinary collaboration, and 360-degree strategies that systematically bring out the complementarity of AI and natural intelligence (NI), including the knowledge, experience, and intuition of humans. ... Closing the gap of preventable health inequalities cannot be achieved by advanced algorithms alone. It requires us to integrate the strengths of artificial intelligence with natural intelligence — the knowledge, ethical judgment, empathy, and cultural understanding of human beings — to ensure that solutions are both effective and just. By anchoring AI in localized insight and human expertise, we can align personal health improvements (micro) with community-led action (meso), informed national policies (macro), and globally coordinated strategies (meta), delivering equitable outcomes in every arena of the organically evolving kaleidoscope that we are part of.


How to Take a Security-First Approach to AI Implementation

Whether it's a third-party tool or an in-house project, thorough research and a clear plan will go a long way toward reducing risks. When developing guidelines for AI implementation, the first step is to match the business case with available tools, remembering that some models are better suited to specific tasks than others. Practicing a Secure by Design strategy from the ground up can future-proof AI implementation. These principles ensure that security is prioritized throughout the entire lifecycle of an AI product, with a Secure by Design methodology implementing multiple layers of defense against cyberthreats. During the planning stage, the security team's input is critical for a Secure by Design approach. Vendor trust is also vital: evaluating vendors for trustworthiness and auditing contracts thoroughly, including regularly monitoring updates to vendor terms and conditions, are imperative. Data quality must also be assessed against metrics like accuracy, relevance, and completeness. ... Keeping security at the forefront from the get-go confers advantages, especially as tools and risks evolve. Safer AI is on the horizon as more users adhere to best practices through regulatory frameworks, international collaborations, and security-first use cases.
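The kind of data-quality assessment described above can be automated as a pre-training gate. The sketch below is illustrative only: the field names, the completeness metric, and the 95% threshold are hypothetical choices a data or security team would set, not anything prescribed by the article.

```python
# Hypothetical data-quality gate: scores a dataset on completeness
# before it is used for AI training. Thresholds and fields are assumed.

def completeness(records, fields):
    """Fraction of expected field values that are present and non-empty."""
    total = len(records) * len(fields)
    filled = sum(
        1 for r in records for f in fields
        if r.get(f) not in (None, "")
    )
    return filled / total if total else 0.0

def quality_gate(records, fields, min_completeness=0.95):
    score = completeness(records, fields)
    return {"completeness": score, "passed": score >= min_completeness}

if __name__ == "__main__":
    data = [
        {"id": 1, "label": "spam", "text": "hello"},
        {"id": 2, "label": "", "text": "world"},  # missing label
    ]
    print(quality_gate(data, ["id", "label", "text"]))
```

In practice, accuracy and relevance need ground truth or domain review, so a gate like this would cover only the mechanically checkable metrics.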


Data Governance in DevOps: Ensuring Compliance in the AI Era

Implementing effective CI/CD pipeline governance in the age of AI requires a multifaceted approach. It starts with establishing clear policies outlining compliance requirements, security standards, and ethical guidelines for AI development. These policies should be embedded into the pipeline through automated checks and gates. Leveraging advanced automation tools for continuous compliance checking throughout the pipeline is essential. These tools can scan code for vulnerabilities, check for adherence to coding standards, and even analyze AI models for potential biases or unexpected behaviors. Robust version control and change management processes are also crucial components of pipeline governance. They ensure that every change to the codebase or AI model is tracked, reviewed, and approved before progressing through the pipeline. We can't forget logging and auditing. Comprehensive logging and monitoring of all pipeline activities provide the necessary audit trails for compliance demonstration and post-incident analysis. In the context of AI, this extends to monitoring deployed models for performance drift or unexpected behaviors, ensuring ongoing compliance post-deployment. 
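The automated checks and gates mentioned above can be as simple as a script the pipeline runs before promoting an artifact. This is a minimal sketch under assumed conventions: the required metadata fields and the "bias_report" status value are hypothetical, not part of any standard.

```python
# Hypothetical policy gate a CI/CD pipeline could run before promoting
# an AI model artifact. Required fields and status values are assumed.

REQUIRED_METADATA = {"model_version", "training_data_hash",
                     "approver", "bias_report"}

def policy_gate(artifact_metadata):
    """Return a list of violations; an empty list means the gate passes."""
    violations = []
    missing = REQUIRED_METADATA - artifact_metadata.keys()
    if missing:
        violations.append(f"missing metadata: {sorted(missing)}")
    if artifact_metadata.get("bias_report") == "failed":
        violations.append("bias report failed")
    return violations

if __name__ == "__main__":
    candidate = {"model_version": "1.4.2", "training_data_hash": "abc123",
                 "approver": "jdoe", "bias_report": "passed"}
    issues = policy_gate(candidate)
    print("PROMOTE" if not issues else f"BLOCK: {issues}")
```

A gate like this also produces the audit trail the excerpt calls for: each run's violations can be logged alongside the change that triggered it.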


Top 10 Cloud Data Center Stories of 2024

If you work in the data center industry, you may use the term on-premise (or on-prem) frequently. But have you ever stopped to wonder how the phrase entered the data center lexicon – or considered why on-premise doesn’t make grammatical sense? In a nutshell, the answer is that it should be on-premises – note the s on the end – because premise and premises are different words. If not, you’ll be enlightened by our coverage of the history of the term on-prem and why it has long irked certain CIOs. ... The more complex your cloud architecture becomes, the harder it is to identify security risks and other misconfigurations. That’s why the ability to automate security assessments is growing increasingly important. But how good are the solutions that cloud providers offer for this purpose? To find out, we took a close look at compliance reporting tools from Azure and GCP. The takeaway was that these solutions can automate much of the work necessary to identify misconfigurations that could trigger compliance violations, but they’re no substitute for human experts. ... What was less often discussed – but equally important – is the role of edge infrastructure in AI. That’s what we focused on in our report about edge AI, meaning AI workloads that run at the network edge instead of in traditional cloud data centers.


Clop Ransomware Takes Responsibility for Cleo Mass Exploits

Whether Clop is actually responsible for the attacks targeting various types of Cleo's managed file transfer (MFT) software couldn't be confirmed. Separately, on Dec. 10, British cybersecurity expert Kevin Beaumont reported having evidence that the ransomware group Termite possessed a zero-day exploit for vulnerabilities in the Cleo products. Security experts said both groups may well have been involved, either separately or together. "Although Cl0p posted a message on their website, this is not hard evidence pointing to a single threat group's involvement. Therefore, any discussion of whether Termite or Cl0p are behind this exploit is speculation until proven with other indicators/evidence," said Christiaan Beek, senior director of threat analytics at cybersecurity firm Rapid7. "We have seen Cl0p utilize complex chains similar to this vulnerability in multiple file transfer use cases before, such as MOVEit and Accellion FTA in 2021," Beek added. ... The latest attacks appear to target, in part, CVE-2024-50623, an unrestricted file upload vulnerability in the managed file transfer products Cleo Harmony, VLTrader and LexiCom. Exploiting the vulnerability enables attackers to remotely execute code with escalated privileges.
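The flaw class here, unrestricted file upload, has well-known generic mitigations. The sketch below illustrates two of them (an extension allowlist and a path-traversal check); it is a defensive illustration of the vulnerability class, not Cleo's actual code or patch, and the allowed extensions and upload root are hypothetical.

```python
# Generic server-side upload validation: extension allowlist plus a
# check that the destination stays inside the upload root. Policy
# values are hypothetical; this is unrelated to Cleo's implementation.

from pathlib import Path

ALLOWED_EXTENSIONS = {".csv", ".txt", ".xml"}   # assumed policy
UPLOAD_ROOT = Path("/srv/uploads")              # assumed location

def safe_upload_path(filename: str) -> Path:
    name = Path(filename).name  # strip any client-supplied directories
    if Path(name).suffix.lower() not in ALLOWED_EXTENSIONS:
        raise ValueError(f"extension not allowed: {name}")
    dest = (UPLOAD_ROOT / name).resolve()
    if UPLOAD_ROOT.resolve() not in dest.parents:
        raise ValueError("path escapes upload root")
    return dest

if __name__ == "__main__":
    print(safe_upload_path("report.csv"))
```

Unrestricted-upload bugs like CVE-2024-50623 typically arise when one or both of these checks is missing, letting an attacker drop an executable payload into a location the server will later run.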


Balancing security and user experience to improve fraud prevention strategies

There may not be one right way of handling the balance of security and user-friendly customer experience. Different institutions and their customers will have different needs, and processes might vary somewhat. But overall, there should be clear, easy-to-follow standards and checkpoints built into whatever financial institutions do. For instance, some banks or credit card companies may allow customers to set their own stopgap for purchases over a certain amount, which may reduce the incentive for relatively large-scale fraud. These companies could also introduce some level of personalization into the processes, like how a credit or debit card could be easily turned on and off by customers themselves via an app or site. ... Meanwhile, it seems like barely a day goes by without coverage of fraud or of a release of personal info through the hacking of some corporation, and some speculate that increasingly advanced technology may make it easier for those who want to perpetrate fraud. With this in mind, there may be a greater emphasis placed on enhancing security and experimentation in what different institutions do to find what works best and to have a process in place that allows customers to have confidence in their banks and credit card companies.
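The customer-configurable controls described above, a self-imposed per-transaction cap and an app-controlled card toggle, reduce to simple authorization rules. This is a toy sketch; the field names, messages, and limit are hypothetical.

```python
# Toy authorization check for the customer-set controls described:
# a per-transaction cap and an on/off card toggle. Schema is assumed.

def authorize(amount, card):
    if not card["enabled"]:
        return "declined: card disabled by customer"
    cap = card.get("per_transaction_cap")
    if cap is not None and amount > cap:
        return "declined: exceeds customer-set cap"
    return "approved"

card = {"enabled": True, "per_transaction_cap": 500.00}
print(authorize(120.00, card))   # approved
print(authorize(2500.00, card))  # declined: exceeds customer-set cap
card["enabled"] = False
print(authorize(120.00, card))   # declined: card disabled by customer
```

The point of putting these rules in the customer's hands is that the "right" cap differs per person, which is exactly the personalization the excerpt argues for.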


Generative AI Is Just the Beginning — Here’s Why Autonomous AI is Next

Embracing this technology will unlock significant opportunities to improve organizational efficiency and accuracy. But before we dive into this, let us start with some definitions. Autonomous AI refers to systems that can perform tasks without human intervention. In contrast, generative AI systems focus on content creation based on existing data. What sets autonomous AI apart is its ability to self-manage. Understanding this difference is crucial because it enables organizations to use AI for more complex operations like predictive maintenance and resource optimization. ... The first step in successfully integrating autonomous AI into your organization is implementing robust data governance frameworks to support these advanced systems. Establish clear data privacy and transparency guidelines to ensure autonomous AI operates within ethical boundaries. It’s crucial to incorporate technical controls that prevent the AI from making reckless decisions, aligning its actions with your organizational values. ... When exploring the future of autonomous AI within your organization, it’s crucial to monitor and evaluate your autonomous AI systems regularly. Continuous assessment allows you to understand how the AI is performing and identify potential improvement areas.


Privacy by design approach drives business success in today’s digital age

Businesses that adhere to data privacy practices validate the upkeep of customer data and data privacy, earning them a stronger brand reputation. They should also ensure privacy is embedded in the organisation’s framework across technology, products, and services, an approach known as Privacy by Design (PbD). ... The PbD framework was developed by Dr. Ann Cavoukian, Information & Privacy Commissioner of Ontario, jointly with the Dutch Data Protection Authority and the Netherlands Organisation for Applied Scientific Research in 1995. It aimed to cultivate and embed privacy defences to safeguard data in the design process of a product, service, or system. Privacy becomes the default setting, built in at the very beginning rather than as an afterthought. This framework is founded on seven core principles: being proactive and not reactive, having privacy as the default setting, having privacy embedded into design, full functionality, end-to-end security, visibility and transparency, and respect for user privacy. ... The PbD approach, proactive by nature, signals a company’s commitment to protecting customers’ sensitive personal information. PbD enables companies to have personalised engagement with customers while respecting their privacy preferences.


Top 10 cybersecurity misconfigurations: Nail the setup to avoid attacks

Despite the industry-wide buzz about things like zero trust, which is rooted in concepts such as least-privileged access control, this weakness still runs rampant. CISA’s publication calls out excessive account privileges, elevated service accounts, and non-essential use of elevated accounts. Anyone who has worked in IT or cyber for some time knows that many of these issues can be traced back to human behavior and the general demands of working in complex environments. ... Another fundamental security control that makes an appearance is the need to segment networks, a practice that again ties to the broader push for zero trust. By failing to segment networks, organizations fail to establish security boundaries between different systems, environments, and data types. This allows malicious actors to compromise a single system and move freely across systems without encountering friction or the additional security controls and boundaries that could impede their nefarious activities. The publication specifically calls out the lack of segmentation between IT and OT networks, which puts at risk OT networks whose compromise has real-world security and safety implications in environments such as industrial control systems.
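Segmentation ultimately comes down to an explicit, default-deny map of which zones may talk to which. The sketch below expresses that policy idea in code; the zone names are hypothetical, and a real deployment would enforce this in firewalls or network fabric rather than application logic.

```python
# Segmentation as policy: an explicit allowlist of permitted
# zone-to-zone flows, defaulting to deny. Zone names are hypothetical.

ALLOWED_FLOWS = {
    ("it_corp", "dmz"),   # corporate IT may reach the DMZ
    ("dmz", "internet"),  # only the DMZ talks to the internet
}

def flow_permitted(src_zone: str, dst_zone: str) -> bool:
    """Default-deny: a flow is allowed only if explicitly listed."""
    return (src_zone, dst_zone) in ALLOWED_FLOWS

# IT cannot reach OT directly; there is no allowlist entry for it
print(flow_permitted("it_corp", "ot_plant"))  # False
print(flow_permitted("it_corp", "dmz"))       # True
```

The absence of an ("it_corp", "ot_plant") entry is the whole point: an attacker who lands in the IT zone hits a boundary instead of moving freely into the OT environment.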


Why Indian enterprises are betting big on hybrid multi-cloud strategies

The multi-cloud strategy in India is deeply intertwined with the country’s broader digital transformation initiatives. The Government of India’s Digital India program and initiatives like the National Cloud Initiatives are providing a robust framework for cloud adoption. ... The importance of edge computing is growing, and the rollout of 5G is opening up new possibilities for distributed cloud architectures. Telecom titans like Jio and Airtel are investing substantially in cloud-native infrastructure, creating ripple effects throughout industries. On the other hand, startup ecosystems play a crucial role too. Bangalore, often called the Silicon Valley of India, has become a hotbed for cloud-native technologies. Companies and numerous cloud consulting firms are developing cutting-edge multi-cloud solutions that are gaining global recognition. Foreign investments are pouring in. Major cloud providers like AWS, Microsoft Azure, and Google Cloud are expanding their infrastructure in India, with dedicated data centers that meet local compliance requirements. This local presence is critical for enterprises concerned about data sovereignty and latency.



Quote for the day:

"You aren’t going to find anybody that’s going to be successful without making a sacrifice and without perseverance." -- Lou Holtz

Daily Tech Digest - December 16, 2024

What IT hiring looks like heading into 2025

AI isn’t replacing jobs so much as it is reshaping the nature of work, said Elizabeth Lascaze, a principal in Deloitte Consulting’s Human Capital practice. She, too, sees evidence that entry-level roles focused on tasks like note-taking or basic data analysis are declining as organizations seek more experienced workers for junior positions. “Today’s emerging roles require workers to quickly leverage data, generate insights, and solve problems,” she said, adding that those skilled in using AI, such as cybersecurity analysts applying AI for threat detection, will be highly sought after. Although the adoption of AI has led to some “growing pains,” many workers are actually excited about it, Lascaze said, with most employees believing it will create new jobs and enhance their careers. “Our survey found that just 24% of early career workers and 14% of tenured workers fear their jobs will be replaced by AI,” Lascaze said. “Tenured workers are more likely to lead organizational strategy, so they may prioritize AI’s potential to improve efficiency, sophistication, and work quality in existing roles rather than AI’s potential to eliminate certain positions.” “These workers reported being slightly more focused on building AI fluency than early-career employees,” Lascaze said.


The Future of AI (And Travel) Relies on Synthetic Data

Synthetic data enhances accuracy and fairness in AI models as organic data can be biased or unbalanced, leading to ML models failing to represent diverse populations accurately. With synthetic data, researchers can create datasets that more accurately reflect the demographics they intend to serve, thereby minimizing biases and improving overall model robustness. ... Synthetic data can be a double-edged sword. While it addresses data privacy and availability challenges, it can inadvertently carry or magnify biases embedded in the original dataset. When source data is flawed, those imperfections can cascade into the synthetic version, skewing results — a critical concern in high-stakes domains like healthcare and finance, where precision and fairness are paramount. To counteract this, having a human in the loop is super important. While there’s a temptation to use synthetic data to fill in every gap for better accuracy and fairness, we understood that running synthetic searches for every flight combination possible globally for our price tracking and predictions feature could overwhelm our booking system and impact real travelers organically searching for flights. Synthetic data has limitations that go beyond bias. 


9 Cloud Service Adoption Trends

Most organizations are building modern cloud computing applications to enable greater scalability while reducing cost and consumption. They’re also more focused on the security and compliance of cloud systems and how providers are validating and ensuring data protection. “Their main focus is really around cost, but a second focus would be whether providers can meet or exceed their current compliance requirements,” says Will Milewski, SVP of cloud infrastructure and operations at content management solution provider Hyland. ... There’s a fundamental shift in cloud adoption patterns, driven largely by the emergence of AI and ML capabilities. Unlike previous cycles focused primarily on infrastructure migration, organizations are now having to balance traditional cloud ROI metrics with strategic technology bets, particularly around AI services. According to Kyle Campos, chief technology and product officer at cloud management platform provider CloudBolt Software, this evolution is being catalyzed by two major forces: First, cloud providers are aggressively pushing AI capabilities as key differentiators rather than competing on cost or basic services. Second, organizations are realizing that cloud strategy decisions today have more profound implications for future innovation capabilities than ever before.


We’ve come a long way from RPA: How AI agents are revolutionizing automation

As the AI ecosystem evolves, a significant shift is occurring toward vertical AI agents — highly specialized AI systems designed for specific industries or use cases. As Microsoft founder Bill Gates said in a recent blog post: “Agents are smarter. They’re proactive — capable of making suggestions before you ask for them. They accomplish tasks across applications. They improve over time because they remember your activities and recognize intent and patterns in your behavior.” Unlike traditional software-as-a-service (SaaS) models, vertical AI agents do more than optimize existing workflows; they reimagine them entirely, bringing new possibilities to life. ... The most profound shift in the automation landscape is the transition from RPA to multi-agent AI systems capable of autonomous decision-making and collaboration. According to a recent Gartner survey, this shift will enable 15% of day-to-day work decisions to be made autonomously by 2028. These agents are evolving from simple tools into true collaborators, transforming enterprise workflows and systems. ... As AI agents progress from handling tasks to managing workflows and entire jobs, they face a compounding accuracy challenge. Each additional step introduces potential errors, multiplying and degrading overall performance.
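The compounding-accuracy point is just multiplication: if each step of an agentic workflow succeeds independently with probability p, a chain of n steps succeeds with probability p to the power n. The 95% figure below is an illustrative assumption, not a measured number.

```python
# End-to-end reliability of a multi-step agent chain, assuming each
# step succeeds independently with the same probability.

def chain_accuracy(per_step_accuracy: float, steps: int) -> float:
    return per_step_accuracy ** steps

for n in (1, 5, 10, 20):
    print(f"{n:2d} steps at 95% each -> "
          f"{chain_accuracy(0.95, n):.1%} end-to-end")
```

Even at an optimistic 95% per step, a 20-step workflow completes correctly only about a third of the time, which is why longer agent chains need checkpoints, retries, or human review rather than open-loop execution.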


8 reasons why digital transformations still fail

“People got really excited about, ‘We’re going to transform,’” Woerner says, but she believes part of the problem lies with leaders who “didn’t have the discipline to make the hard choices early on” to get employee buy-in. Ranjit Varughse, CIO of automotive paint and equipment firm Wesco Group, agrees. “The first challenge is getting digital transformation buy-in from teams at the outset. People are creatures of habit, making many hesitant to change their existing systems and processes,” he says. “Without a clear change management strategy to get a team aligned, ERP implementations in particular can be slow, stall, or even fail entirely.” ... Digital transformation isn’t a technology problem, it’s about understanding how people actually work, not how we think they should work, Wei says. “At PropertySensor, we scrapped our first version after realizing real estate agents needed mobile-first solutions, not desktop dashboards,” he says. ... “People, process, and technology” is a common phrase technology leaders use when discussing the critical elements of a transformation. “But the real focus should be people, people, people,” echoes Megan Williams, vice president of global technology strategy and transformation at TransUnion.


How companies can address bias and privacy challenges in AI models

Companies understand that AI adoption is existential to their survival, with the winners of tomorrow being determined by their ability to harness AI effectively. Furthermore, they understand that their brand’s reputation is one of their most valuable assets. Missteps with AI—especially in mission-critical contexts (think of a trading algorithm going AWOL, a breach of user privacy, or a failure to meet safety standards)—can erode public trust and harm a company’s bottom line. This could have dire consequences. With a company’s competitiveness and potentially its very survival at stake, AI governance becomes a business imperative that they cannot afford to ignore. ... Certainly, we see a lot of activity from the government – both at the state and federal levels – which is creating a fragmented approach. We also see leading companies who understand that adopting AI is crucial to their future and want to move fast. They are not waiting for the regulatory environment to settle and are taking a leadership position in adopting responsible AI principles to safeguard their brand reputations. So, I believe companies will act intelligently out of self-interest to accelerate their AI initiatives and increase business returns.


Ensuring AI Accountability Through Product Liability: The EU Approach and Why American Businesses Should Care

In terms of a substantive law regulating AI (which can be the basis of the causality presumption under the proposed AI Liability Directive), the European Union’s Artificial Intelligence Act (AI Act) entered into force on August 1, 2024, becoming the first comprehensive legal framework for AI globally. The AI Act applies to providers and developers of AI systems that are marketed or used within the EU (including free-to-use AI technology), regardless of whether those providers or developers are established in the EU or a separate country. The EU AI Act sets forth requirements and obligations for developers and deployers of AI systems in accordance with risk-based classification system and a tiered approach to governance, which are two of the most innovative features of the AI Act. The Act classifies AI applications into four risk categories: unacceptable risk, high risk, limited risk, and minimal or no risk. AI systems deemed to pose an unacceptable risk, such as those that violate fundamental rights, are outright banned. ... High-risk AI systems, which include areas such as health care, law enforcement, and critical infrastructure, will face stricter regulatory scrutiny and must comply with rigorous transparency, data governance, and safety protocols. 


Agentic AI is evolving into specialised assistants, enabling the workforce to focus on value-adding tasks

A structured discovery approach is required to identify high impact areas for AI adoption rather than siloed use-cases. Infosys Topaz comprises verticalised blueprints, industry catalogues and strategic AI value map analysis capabilities. We have created playbooks for industries that lay out a structured roadmap to embed and mature GenAI into core processes and operations and across the IT landscape. This includes the right use-cases across the value stream spanning operations, customer experience, research and development, etc. As part of our Responsible AI by Design approach, we implement robust technical and process guardrails to ensure privacy and security. These include impact assessments, audits, automated policy enforcement, monitoring tools, and runtime safeguards to filter inputs and outputs for generative AI. We also use red-teaming and advanced testing tools to identify vulnerabilities and fortify AI models. Additionally, we employ privacy-preserving techniques such as Homomorphic Encryption and Secure Multi-Party Computation to enhance the security and resilience of our AI solutions. ... AI-driven monitoring tools detect inefficiencies in IT infrastructure, leveraging predictive analytics and forecasting techniques to improve utilisation in real time.
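One of the privacy-preserving techniques named above, Secure Multi-Party Computation, can be illustrated with its simplest building block: additive secret sharing. The sketch below is a toy under stated assumptions (a fixed prime modulus, two honest parties, salary figures as the example data); production MPC uses hardened protocols and libraries, not this.

```python
# Toy additive secret sharing over a prime field: each party holds only
# a random-looking share, yet sums can be computed and revealed without
# exposing any individual value. Figures and party count are assumed.

import random

P = 2**61 - 1  # a Mersenne prime used as the field modulus

def share(secret, n_parties=2):
    """Split a secret into n additive shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

s1 = share(120_000)  # party A's salary, split into shares
s2 = share(95_000)   # party B's salary, split into shares
# each position adds its shares locally; only the total is revealed
total = reconstruct([a + b for a, b in zip(s1, s2)])
print(total)  # 215000
```

Homomorphic encryption achieves a related goal (computing on encrypted values) by different means; both let an AI pipeline aggregate sensitive inputs without any single component seeing the raw data.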


Security leaders' top 10 takeaways for 2024

One of the most significant new rules, which has received the lion’s share of press attention, is the ‘materiality’ component, or the need to report “material” cybersecurity incidents to the SEC within four business days. At issue is whether the incident led to significant risk to the organization and its shareholders. If so, it’s defined as material and must be reported within four business days of this determination being made (not its initial discovery). “Materiality extends beyond quantitative losses, such as direct financial impacts, to include qualitative aspects, like reputational damage and operational disruptions,” McGladrey says. He says the SEC’s materiality guidance underscores the importance of investor protection in relation to cybersecurity events and, if in doubt, the safest path is reporting. “If a disclosure is uncertain, erring on the side of transparency safeguards shareholders,” he tells CSO. ... As a virtual or fractional CISO service, Sage has observed startups engaging vCISO services earlier, in pre-seed and Series A stage and, in some cases, before they’ve finalized their minimum viable product. “Small technology consulting and boutique software development groups are looking for ISO 27001 certifications to ensure they can continue serving their larger customers,” she tells CSO.


Emotional intelligence in IT management: Impact, challenges, and cultural differences

While delivering results is the primary goal of any leader, you can’t forget that you’re managing people, not machines. Emotional intelligence helps balance the need for productivity with fairness and empathy. One way to illustrate this balance is through handling difficult conversations about career moves. Managing a team of over 100 support specialists for several years gave me the opportunity to conduct an interesting experiment. Many employees tend to hide the fact that they are exploring job opportunities elsewhere until the last minute. This creates unnecessary tension and can lead to higher turnover. However, if a manager removes the stigma around job interviews and treats them as part of market research, it encourages open communication. ... Emotionally intelligent managers possess the ability to identify the core of a conflict without letting it escalate. Attempting to gather every single piece of information is not always helpful. Instead, managers should focus on resolving conflicts, as often the solution is already within the team. This does not mean conducting surveys or asking for feedback from each person, as delicate situations require a more refined approach. A manager should observe, analyze, and extract the most significant points quickly and intuitively, enabling conflict resolution before it grows into a larger issue.



Quote for the day:

“Things come to those who wait, but only the things left by those who hustle” -- Abraham Lincoln

Daily Tech Digest - December 15, 2024

Navigating the Future: Cloud Migration Journeys and Data Security

To meet the requirements of DORA and future regulations, business leaders must adopt a proactive and reflexive approach to cybersecurity. Strong cyber hygiene practices must be integrated throughout the business, ensuring consistency in how data is handled, protected, and accessed. It is important to note at this juncture that enhanced data security isn’t purely focused on compliance. Modern IT researchers and business analysts have been studying what differentiates the most innovative companies for decades and have identified two key principles that help businesses achieve this: Unified Control and Federated Protection. ... Advancements in data security technologies are reshaping the cloud landscape, enabling faster and more secure migrations. Privacy Enhancing Technologies (PETs) like dynamic data masking (DDM), tokenisation, and format-preserving encryption help businesses anonymise sensitive data, reducing breach risks while keeping cloud adoption fast and flexible. However, as businesses will inevitably adopt multi-cloud strategies to support their processes, they will require interoperable security platforms that can seamlessly integrate across multiple cloud environments. 
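Two of the PETs mentioned, dynamic data masking and tokenisation, are easy to illustrate. The sketch below is a toy: the role names and token format are hypothetical, and the in-memory dict standing in for a token vault would be a hardened, access-controlled service in any real deployment.

```python
# Toy dynamic data masking and tokenisation. Roles, token format, and
# the in-memory "vault" are illustrative assumptions only.

import secrets

_vault = {}  # token -> real value; in production, a secured service

def tokenize(value: str) -> str:
    """Replace a sensitive value with a random, non-reversible token."""
    token = "tok_" + secrets.token_hex(8)
    _vault[token] = value
    return token

def detokenize(token: str) -> str:
    """Only systems with vault access can recover the real value."""
    return _vault[token]

def mask_card(pan: str, role: str) -> str:
    """Dynamic masking: unprivileged roles see only the last 4 digits."""
    if role == "fraud_analyst":  # assumed privileged role
        return pan
    return "*" * (len(pan) - 4) + pan[-4:]

pan = "4111111111111111"
print(mask_card(pan, "support_agent"))  # ************1111
```

Format-preserving encryption goes a step further than tokenisation by producing ciphertext with the same shape as the input (e.g., a 16-digit number), which lets legacy systems process protected data unchanged during a cloud migration.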


Maximizing AI Payoff in Banking Will Demand Enterprise-Level Rewiring

Beyond thinking in broad strokes of AI’s applicability in the bank, McKinsey holds that an institution has to be ready to adopt multiple kinds of AI set up in a way to work with each other. This includes analytical AI — the types of AI that some banks have been using for years for credit and portfolio analysis, for instance — and generative AI, in the forms of ChatGPT and others, as well as “agentic AI.” In general, agentic AI uses AI that applies other types of AI to perform analyses and solve problems as a “virtual coworker.” It’s a developing facet of AI and, as described in the report, is meant to manage multiple AI inputs, rather than having a bank lean on one model. ... “You measure the outcomes you want to achieve and at the end of the pilot you will typically come out with a very good understanding of how to scale it,” Giovine says. Over six to 12 months after the pilot, “you can scale it over a good chunk of the domain.” And here, the consultant says, is where the bonus kicks in: Often a good deal of the work done to bring AI thinking to one domain can be re-used. This applies to both the business thinking and technology.


Synthetic data has its limits — why human-sourced data can help prevent AI model collapse

The more AI-generated content spreads online, the faster it will infiltrate datasets and, subsequently, the models themselves. And it’s happening at an accelerated rate, making it increasingly difficult for developers to filter out anything that is not pure, human-created training data. The fact is, using synthetic content in training can trigger a detrimental phenomenon known as “model collapse” or “model autophagy disorder (MAD).” Model collapse is the degenerative process in which AI systems progressively lose their grasp on the true underlying data distribution they’re meant to model. This often occurs when AI is trained recursively on content it generated, leading to a number of issues:

- Loss of nuance: Models begin to forget outlier data or less-represented information, crucial for a comprehensive understanding of any dataset.
- Reduced diversity: There is a noticeable decrease in the diversity and quality of the outputs produced by the models.
- Amplification of biases: Existing biases, particularly against marginalized groups, may be exacerbated as the model overlooks the nuanced data that could mitigate these biases.
- Generation of nonsensical outputs: Over time, models may start producing outputs that are completely unrelated or nonsensical.
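The recursive-training mechanism behind model collapse can be demonstrated with a toy model: fit a Gaussian to data, sample from the fit, refit on the samples, and repeat. With finite samples the fitted parameters drift and outlier information is gradually lost, a small-scale analogue of the "loss of nuance" described above. The sample sizes and generation count here are arbitrary choices for illustration.

```python
# Toy model-collapse demo: each "generation" is trained only on samples
# drawn from the previous generation's fitted model.

import random
import statistics

def collapse_demo(generations=5, n=200, seed=0):
    """Return the fitted stdev at each generation of recursive refitting."""
    rng = random.Random(seed)
    data = [rng.gauss(0.0, 1.0) for _ in range(n)]  # the "human" data
    history = []
    for _ in range(generations):
        mu = statistics.fmean(data)
        sigma = statistics.stdev(data)
        history.append(sigma)
        # next generation sees only the previous model's outputs
        data = [rng.gauss(mu, sigma) for _ in range(n)]
    return history

for g, s in enumerate(collapse_demo()):
    print(f"gen {g}: fitted stdev = {s:.3f}")
```

Because each generation estimates its parameters from a finite sample, the estimates random-walk away from the true distribution; with no fresh human data entering the loop, there is nothing to pull them back.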


The Macy’s accounting disaster: CIOs, this could happen to you

It wasn’t outright fraud or theft. But that’s merely because the employee didn’t try to steal. But the same lax safeguards that allowed expense dollars to be underreported could have just as easily allowed actual theft. “What will happen when someone actually has motivation to commit fraud? They could have just as easily kept the $150 million,” van Duyvendijk said. “They easily could have committed mass fraud without this company knowing. (Macy’s) people are not reviewing manual journals very carefully.” ... “It’s true that most ERPs are not designed to catch erroneous accounting,” she said. “However, there are software tools that allow CFOs and CAOs to create more robust controls around accounting processes and to ensure the expenses get booked to the correct P&L designation. Initiating, approving, recording transactions, and reconciling balances are each steps that should be handled by a separate member of the team. There are software tools that can assist with this process, such as those that enable use of AI analytics to assess actual spend and compare that spend to your reported expenses. Some such tools use AI to look for overriding journal entries that reverse expense items and move those expenses to a balance sheet account.”
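The control described in the quote, scanning for journal entries that reverse expenses into a balance-sheet account, can start as a simple rule before any AI is involved. The entry schema and account names below are hypothetical, invented purely to illustrate the check.

```python
# Rule-based sketch of the control described: flag manual journal
# entries that credit an expense account and debit a balance-sheet
# account. Schema and account names are hypothetical.

EXPENSE_ACCOUNTS = {"delivery_expense"}
BALANCE_SHEET_ACCOUNTS = {"accrued_liabilities", "prepaid_assets"}

def flag_suspicious(entries):
    flagged = []
    for e in entries:
        credits_expense = e["credit_account"] in EXPENSE_ACCOUNTS
        debits_balance_sheet = e["debit_account"] in BALANCE_SHEET_ACCOUNTS
        if e["manual"] and credits_expense and debits_balance_sheet:
            flagged.append(e["id"])
    return flagged

entries = [
    {"id": "JE-1", "manual": True, "debit_account": "accrued_liabilities",
     "credit_account": "delivery_expense", "amount": 1_000_000},
    {"id": "JE-2", "manual": False, "debit_account": "delivery_expense",
     "credit_account": "cash", "amount": 5_000},
]
print(flag_suspicious(entries))  # ['JE-1']
```

Flagging is only half the control: segregation of duties means the person who posted JE-1 cannot also be the one who reviews and clears the flag.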


Digital Nomads and Last-Minute Deals: How Online Data Enables Offline Adventures

Along with the preference for remote work, the pandemic boosted another trend: spontaneity. Many emerged from it more spontaneous, having seen how suddenly and for how long travel can be restricted. Even before, millennials were ready to embrace impromptu travel, with half of them having planned last-minute vacations. For digital nomads, last-minute deals for flights and hotels are even more important, as they need to adapt to changing situations quickly to strike a work-life balance on the go. This opens opportunities for websites to offer services that assist digital nomads in finding the best last-minute deals. ... Many of the first successful startups by nomads taught the nomadic lifestyle or connected nomads with each other. For example, some websites use APIs to aggregate data about the suitability of cities for remote work. Drawing data from various online sources in real time, such platforms can constantly provide information relevant to traveling remote workers. And the relevant information is very diverse. The aforementioned travel and hospitality prices and deals alone generate volumes of data every second. Then there is information about security and internet stability in various locations, which requires reliable and constantly updated reviews.
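Such an aggregator can be pictured as a weighted score over per-city signals; the field names, weights, and numbers below are invented for illustration, not drawn from any real platform.

```python
def suitability(city, weights):
    """Combine normalized per-city signals into a single score."""
    return sum(weights[k] * city[k] for k in weights)

# All signals pre-normalized to 0-1; "cost" is inverted so higher = cheaper.
weights = {"internet": 0.4, "safety": 0.3, "cost": 0.3}
cities = {
    "Lisbon":  {"internet": 0.9, "safety": 0.8, "cost": 0.6},
    "Bangkok": {"internet": 0.8, "safety": 0.7, "cost": 0.9},
}

ranked = sorted(cities, key=lambda c: suitability(cities[c], weights), reverse=True)
print(ranked)
```

A real service would refresh the underlying signals continuously from its source APIs; the ranking logic itself stays this simple.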


It’s not what you know, it’s how you know you know it

Developers and technologists have increasingly been learning to code through online media such as blogs and videos over the last four years, according to the Stack Overflow Developer Survey: 60% in 2021, rising to 82% in 2024. The latest resource developers can draw on for learning is generative AI, which is emerging as a key tool offering real-time problem-solving assistance, personalized coding tips, and innovative ways to enhance skill development, seamlessly integrated within daily workflows. There has been a lot of excitement in the world of software development about AI’s potential to increase the speed of learning and access to more knowledge. Speculation abounds as to whether learning will be helped or hindered by AI advancement. Our recent survey of over 700 developers and technologists reveals the process of knowing things is just that: a process. New insights about how the Stack Overflow community learns demonstrate that software professionals prefer to gain and share knowledge through hands-on interactions. Their preferences for sourcing knowledge from, and contributing to, groups or individuals (or AI) provide color on the evolving landscape of knowledge work.


What is data science? Transforming data into value

While closely related, data analytics is a component of data science, used to understand what an organization’s data looks like. Data science takes the output of analytics to solve problems. Data scientists say that investigating something with data is simply analysis, so data science takes analysis a step further to explain and solve problems. Another difference between data analytics and data science is timescale. Data analytics describes the current state of reality, whereas data science uses that data to predict and understand the future. ... The goal of data science is to construct the means to extract business-focused insights from data, and ultimately optimize business processes or provide decision support. This requires an understanding of how value and information flows in a business, and the ability to use that understanding to identify business opportunities. While that may involve one-off projects, data science teams more typically seek to identify key data assets that can be turned into data pipelines that feed maintainable tools and solutions. Examples include credit card fraud monitoring solutions used by banks, or tools used to optimize the placement of wind turbines in wind farms.
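The fraud-monitoring example can be pictured as one stage of such a pipeline; the 2-sigma rule, amounts, and data below are purely illustrative stand-ins for the statistical models banks actually deploy.

```python
import statistics

def flag_outliers(amounts, z=2.0):
    """Flag amounts far outside a cardholder's usual spend.
    The z-score rule here is purely illustrative."""
    mu = statistics.fmean(amounts)
    sd = statistics.pstdev(amounts)
    return [a for a in amounts if sd and abs(a - mu) > z * sd]

# A week of typical card spend, plus one anomalous transaction.
history = [12.5, 9.9, 15.0, 11.2, 14.1, 10.8, 13.3, 950.0]
print(flag_outliers(history))
```

A production pipeline would feed flags like these into a case-management tool rather than a print statement, and would score many more signals than raw amount.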


Tech Giants Retain Top Spots, Credit Goes to Self-Disruption

Companies today know they are not infallible in the face of evolving technologies. They are willing to disrupt their tried and tested offerings to fully capitalize on innovation. This ability of "dual transformation" - sustaining as well as reinventing the core business - is a hallmark of successful incumbents. It enables companies to optimize their existing operations while investing in the future, ensuring they are not caught flat-footed when the next wave of disruption hits. And because they have capital, talent and resources, they are already ahead of newer players. ... There is also a core cultural shift to encourage innovative thinking. Amazon implemented its famous "two-pizza teams" approach, where small, autonomous groups work on focused projects with minimal bureaucracy. Amazon, launched during the dot-com boom, subsequently ventured into successful innovations including Prime, AWS and Alexa. Google's longstanding "20% time" policy, which allows employees to dedicate a portion of their workweek to passion projects, resulted in breakthrough products including AdSense and Google News. Drawing from decades of experience, these organizations know the whole is greater than the sum of its parts.


The Power of the Collective Purse: Open-Source AI Governance and the GovAI Coalition

Collaboration and transparency often go hand in hand. One of the most significant outcomes of the GovAI Coalition’s work is the development of open-source resources that benefit not only coalition members but also vendors and uninvolved governments. By pooling resources and expertise, the coalition is creating a shared repository of guidelines, contracting language, and best practices that any government entity can adapt to their specific needs. This collaborative, open-source initiative greatly reduces the transaction costs for government agencies, particularly those that are understaffed or under-resourced. While the more expansive budgets and technological needs of larger state and local governments sometimes lead to outsized roles in Coalition standard-setting, this allows smaller local governments, which may lack the capacity to develop comprehensive AI governance frameworks independently, to draw on the Coalition’s collective institutional expertise. This crowd-sourced knowledge ensures that even the smallest agencies can implement robust AI governance policies without having to start from scratch.


Redefining software excellence: Quality, testing, and observability in the age of GenAI

Traditional test automation has long relied on rigid, code-based frameworks, which require extensive scripting to specify exactly how tests should run. GenAI upends this paradigm by enabling intent-driven testing. Instead of focusing on rigid, script-heavy frameworks, testers can define high-level intents, like “Verify user authentication,” and let the AI dynamically generate and execute corresponding tests. This approach reduces the maintenance overhead of traditional frameworks, while aligning testing efforts more closely with business goals and ensuring broader, more comprehensive test coverage. ... QA and observability are no longer siloed functions. GenAI creates a semantic feedback loop between these domains, fostering a deeper integration like never before. Robust observability ensures the quality of AI-driven tests, while intent-driven testing provides data and scenarios that enhance observability insights and predictive capabilities. Together, these disciplines form a unified approach to managing the growing complexity of modern software systems. By embracing this symbiosis, teams not only simplify workflows but raise the bar for software excellence, balancing the speed and adaptability of GenAI with the accountability and rigor needed to deliver trustworthy, high-performing applications.
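A minimal sketch of intent-driven testing, with a hard-coded generator standing in for the AI; the intent string, test cases, and authenticate function are all hypothetical.

```python
def authenticate(user, password):
    """Toy system under test."""
    users = {"alice": "s3cret"}
    return users.get(user) == password

def generate_tests(intent):
    """Expand a high-level intent into concrete cases. In a real
    intent-driven tool this expansion would come from a GenAI model."""
    if intent == "Verify user authentication":
        return [
            ("valid credentials", ("alice", "s3cret"), True),
            ("wrong password",    ("alice", "guess"),  False),
            ("unknown user",      ("mallory", "x"),    False),
        ]
    raise ValueError(f"no generator for intent: {intent}")

def run_intent(intent, fn):
    """Execute every generated case and report pass/fail per case."""
    return {name: fn(*args) == expected
            for name, args, expected in generate_tests(intent)}

print(run_intent("Verify user authentication", authenticate))
```

The maintenance win the article describes comes from the fact that only the intent is versioned by the team; the concrete cases are regenerated as the system changes.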



Quote for the day:

"Success is not the key to happiness. Happiness is the key to success. If you love what you are doing, you will be successful." -- Albert Schweitzer

Daily Tech Digest - December 14, 2024

How Conscious Unbossing Is Reshaping Leadership And Career Growth

Conscious unbossing presents both challenges and opportunities for organizations. On the one hand, fewer employees pursuing traditional leadership tracks can create gaps in decision-making, team development, and operational consistency. On the other hand, organizations that embrace unbossing as a cultural strategy can thrive. Novartis is a prime example, fostering a culture of curiosity and empowerment that drives both engagement and innovation. By breaking down rigid hierarchies, they’ve shown how unbossed leadership can be a strategic advantage rather than a liability. ... Conscious unbossing is transforming how we think about leadership and career progression. Organizations that adapt by redefining leadership roles, offering flexible career pathways, and building cultures rooted in curiosity and empathy will thrive. Companies like Novartis, Patagonia, and Microsoft have proven that unbossed leadership isn’t a limitation—it’s an opportunity to innovate and grow. By embracing this shift, businesses can create resilient, dynamic teams and ensure leadership continuity. However, this approach also comes with challenges that organizations must navigate to ensure its success. One potential downside is the risk of role ambiguity. 


Why agentic AI and AGI are on the agenda for 2025

We’re ready to move beyond basic now, and what we’re seeing is an evolution towards a digital co-worker – an agent. Agents are really those digital coworkers, our friends, that are going to help us to do research, write a text, and then publish it somewhere. So you set the goal – let’s say, run research on some telco and networking predictions for next year – and an agent would do the research and run it by you, and then push it to where it needs to go to get reviewed, edited, and more. You would provide it with an outcome, and it will choose the best path to get to that outcome. Right now, chatbots are really an enhanced search engine with creative flair. But agentic AI is the next stage of evolution, and will be used across enterprises as early as next year. This will require increased network bandwidth and deterministic connectivity, with compute closer to users – but these essentials are already being rolled out as we speak, ensuring agentic AI is firmly on the agenda for enterprises in the new year. ... Amid the AI rush, we’ve been focused on the outcomes rather than the practicalities of how we’re accessing and storing the data being generated. But concerns are emerging. Where does the data go? Does it disappear in a big cloud? Concerns are obviously being raised in many sectors, particularly in the medical space, in which medical records cannot leave state or national borders.
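The goal-to-outcome flow described here can be sketched as a minimal loop: a planner decomposes the goal into steps, an executor runs each one, and the result is queued for human review. Both functions below are hard-coded stubs standing in for LLM and tool calls, and the step names are invented.

```python
def plan(goal):
    """Stub planner: a real agent would ask an LLM to decompose the goal."""
    return [f"research: {goal}", "draft text", "submit for review"]

def execute(step):
    """Stub executor: a real agent would invoke search, writing, or
    publishing tools here."""
    return {"step": step, "status": "done"}

def run_agent(goal):
    results = [execute(step) for step in plan(goal)]
    return {"goal": goal, "results": results, "needs_review": True}

outcome = run_agent("telco and networking predictions for next year")
print(outcome["needs_review"], len(outcome["results"]))
```

Note the `needs_review` flag: the article's framing keeps a human in the loop at the end of the chain, and the loop structure makes that hand-off explicit.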


Robust Error Detection to Enable Commercial Ready Quantum Computers from Quantum Circuits

Quantum Circuits has the goal of first making components that are correct and then scaling the systems. This is part of the larger goal of making commercial-ready quantum computers. What is meant by a commercial-ready quantum computer? It means you can bet your business or company on its results, just as we rely today on the servers and computers that provide services via cloud computing systems. Being able to trust and rely on quantum computers means systems that are repeatable, predictable and trusted. The company has built an 8-qubit system that enterprise customers have been using. Customers have said that error mitigation and error detection let them get far more utility from Quantum Circuits than from competing quantum computers. Error suppression and error mitigation are common techniques, pursued intensively by most quantum computer companies and the entire quantum computing community. Quantum Circuits’ error-detecting dual-rail qubit innovation allows errors to be detected and corrected first, to avoid disrupting performance at scale. This system is expected to enable a 10x reduction in resource requirements for scalable error correction.
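The dual-rail idea can be illustrated classically: a qubit is a single excitation shared across two modes, so "0" is (1, 0) and "1" is (0, 1). If the excitation is lost, the state becomes (0, 0), an invalid codeword, so the loss is detected as an erasure rather than silently corrupting data. The sketch below mimics only that detection logic; it is not a simulation of real quantum dynamics, and the loss probability is invented.

```python
import random

def transmit(bit, loss_prob, rng):
    """Encode a bit in two modes; photon loss empties both modes."""
    mode_a, mode_b = (1, 0) if bit == 0 else (0, 1)
    if rng.random() < loss_prob:
        mode_a = mode_b = 0
    return mode_a, mode_b

def decode(modes):
    if modes == (1, 0):
        return 0
    if modes == (0, 1):
        return 1
    return "erasure"   # invalid codeword -> the error is flagged, not missed

rng = random.Random(7)
results = [decode(transmit(1, loss_prob=0.3, rng=rng)) for _ in range(1000)]
print("erasures detected:", results.count("erasure"))
```

The payoff for error correction is that erasures, whose locations are known, are much cheaper to correct than silent errors, which is the intuition behind the resource-reduction claim above.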


5 reasons why Google's Trillium could transform AI and cloud computing - and 2 obstacles

Trillium is designed to deliver exceptional performance and cost savings, featuring advanced hardware technologies that set it apart from earlier TPU generations and competitors. Key innovations include doubled High Bandwidth Memory (HBM), which improves data transfer rates and reduces bottlenecks. Additionally, as part of its TPU system architecture, it incorporates a third-generation SparseCore that enhances computational efficiency by directing resources to the most important data paths. There is also a remarkable 4.7x increase in peak compute performance per chip, significantly boosting processing power. These advancements enable Trillium to tackle demanding AI tasks, providing a strong foundation for future developments and applications in AI. ... Trillium is not just a powerful TPU; it is part of a broader strategy that includes Gemini 2.0, an advanced AI model designed for the "agentic era," and Deep Research, a tool to streamline the management of complex machine learning queries. This ecosystem approach ensures that Trillium remains relevant and can support the next generation of AI innovations. By aligning Trillium with these advanced tools and models, Google is future-proofing its AI infrastructure, making it adaptable to emerging trends and technologies in the AI landscape.


How Industries Are Using AI Agents To Turn Data Into Decisions

In the past, this required hours of manual work to standardize the various file formats — such as converting PDFs to spreadsheets — and reconcile inconsistencies like differing terminologies for revenue or varying date formats. Today, AI agents automate these tasks with human supervision, adapting to schema changes dynamically and normalizing data as it comes in. ... While extracting insights is vital, the ultimate goal of any data workflow is to drive action. Historically, this has been the weakest link in the chain. Insights often remain in dashboards or reports, waiting for human intervention to trigger action. By the time decisions are made, the window of opportunity may already have closed. AI agents, with humans in the loop, are expediting the entire cycle by bridging the gap between analysis and execution. ... The advent of AI agents signals a new era in data management — one where workflows are no longer constrained by team bandwidth or static processes. By automating ETL, enabling real-time analysis and driving autonomous actions, these agents, with the right guardrails and human supervision, are creating dynamic systems that adapt, learn and improve over time.
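A toy version of that normalization step, with invented synonym tables and date formats standing in for the mappings an agent would maintain dynamically:

```python
from datetime import datetime

# Hypothetical terminology map: many source labels, one canonical field.
REVENUE_SYNONYMS = {"revenue", "turnover", "sales", "net sales"}
DATE_FIELDS = {"date", "period", "as_of"}
DATE_FORMATS = ("%Y-%m-%d", "%d/%m/%Y", "%b %d, %Y")

def normalize_record(record):
    """Map heterogeneous source fields onto a canonical schema."""
    out = {}
    for key, value in record.items():
        k = key.strip().lower()
        if k in REVENUE_SYNONYMS:
            out["revenue"] = float(str(value).replace(",", ""))
        elif k in DATE_FIELDS:
            for fmt in DATE_FORMATS:
                try:
                    out["date"] = datetime.strptime(value, fmt).date().isoformat()
                    break
                except ValueError:
                    continue
    return out

print(normalize_record({"Turnover": "1,200.50", "Period": "17/12/2024"}))
```

A production agent would also log unmapped fields for human review rather than dropping them silently, which is where the "human supervision" the article mentions comes in.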


The Power of Stepping Back: How Rest Fuels Leadership and Growth

It's essential to fully step back from work sometimes, especially when balancing the demands of running a business and being a parent. I find that I'm most energised and focused in the mornings, so I like to use that time to read, take notes, and reflect on different aspects of the business - whether it's strategy, growth, or new ideas. It's my creative time to think deeply and plan ahead. ... It's also important to carve out weekend days when I can fully switch off. This time away from the business helps me come back refreshed and with a clearer perspective. Even though I aim to disconnect, Lee (my husband and co-founder) and I often find ourselves discussing business because it's something we're both passionate about - strangely enough, those conversations don't feel like work. ... Stepping back from the day-to-day grind gave me the mental space to realise that while small tests have their place, they can sometimes limit your potential by encouraging cautious, safe moves. By contrast, thinking bigger and aiming for more ambitious goals has opened up a new level of creativity and opportunity. This shift in mindset has been a game-changer for us - it's unlocked several key growth areas, including new product opportunities and ways to engage with customers. 


Navigating the Future of Big Data for Business Success

Big data is no longer just a tool for competitive advantage – it has become the backbone of innovation and operational efficiency across key industries, driving billion-dollar transformations. ... The combination of artificial intelligence and big data, especially through machine learning (ML), is pushing the boundaries of what’s possible in data analysis. These technologies automate complex decision-making processes and uncover patterns that humans might miss. Google’s DeepMind AI, for instance, made a breakthrough in medical research by using data to predict protein folding, which is already speeding up drug discovery. ... Tech giants like Google and Facebook are increasing their data science teams by 20% annually, underscoring the essential role these experts play in unlocking actionable insights from vast datasets. This growing demand reflects the importance of data-driven decision-making across industries. ... AI and machine learning will also continue to revolutionize big data, playing a critical role in data-driven decision-making across industries. By 2025, AI is expected to generate $3.9 trillion in business value, with organizations leveraging these technologies to automate complex processes and extract valuable insights. 


Five Steps for Creating Responsible, Reliable, and Trustworthy AI

Model testing with human oversight is critically important. It allows data scientists to ensure the models they’ve built function as intended and root out any possible errors, anomalies, or biases. However, organizations should not rely solely on the acumen of their data scientists. Enlisting the input of business leaders who are close to the customers can help ensure that the models appropriately address customers’ needs. Being involved in the testing process also gives them a unique perspective that will allow them to explain the process to customers and alleviate their concerns. ... Be transparent: Many organizations do not trust information from an opaque “black box.” They want to know how a model is trained and the methods it uses to craft its responses. Secrecy as to the model development and data computation processes will only serve to engender further skepticism in the model’s output. ... Continuous improvement might be the final step in creating trusted AI, but it’s just part of an ongoing process. Organizations must continue to capture, cultivate, and feed data into the model to keep it relevant. They must also consider customer feedback and recommendations on ways to improve their models. These steps form an essential foundation for trustworthy AI, but they’re not the only practices organizations should follow.


With 'TPUXtract,' Attackers Can Steal Orgs' AI Models

The NCSU researchers used a Riscure EM probe station with a motorized XYZ table to scan the chip's surface, and a high sensitivity electromagnetic probe for capturing its weak radio signals. A Picoscope 6000E oscilloscope recorded the traces, Riscure's icWaves field-programmable gate array (FPGA) device aligned them in real-time, and the icWaves transceiver used bandpass filters and AM/FM demodulation to translate and filter out irrelevant signals. As tricky and costly as it may be for an individual hacker, Kurian says, "It can be a competing company who wants to do this, [and they could] in a matter of a few days. For example, a competitor wants to develop [a copy of] ChatGPT without doing all of the work. This is something that they can do to save a lot of money." Intellectual property theft, though, is just one potential reason anyone might want to steal an AI model. Malicious adversaries might also benefit from observing the knobs and dials controlling a popular AI model, so they can probe them for cybersecurity vulnerabilities. And for the especially ambitious, the researchers also cited four studies that focused on stealing regular neural network parameters. 


Artificial Intelligence Looms Large at Black Hat Europe

From a business standpoint, advances in AI are going to "make those predictions faster and faster, cheaper and cheaper," he said. Accordingly, "if I was in the business of security, I would try to make all of my problems prediction problems," so they could get solved by using prediction engines. What exactly these prediction problems might be remains an open question, although Zanero said other good use cases include analyzing code, and extracting information from unstructured text - for example, analyzing logs for cyberthreat intelligence purposes. "So it accelerates your investigation, but you still have to verify it," Moss said. "The verify part escapes most students," Zanero said. "I say that from experience." One verification challenge is AI often functions like a very complex, black box API, and people have to adapt their prompt to get the proper output, he said. The problem: that approach only works well when you know what the right answer should be, and can thus validate what the machine learning model is doing. "The real problematic areas in all machine learning - not just using LLMs - is what happens if you do not know the answer, and you try to get the model to give you knowledge that you didn't have before," Zanero said. "That's a deep area of research work."



Quote for the day:

"The only person you are destined to become is the person you decide to be." -- Ralph Waldo Emerson

Daily Tech Digest - December 13, 2024

The fintech revolution: How digital disruption is reshaping the future of banking

Several pivotal trends have converged to accelerate fintech adoption. The JAM trinity—Jan Dhan, Aadhaar, and Mobile—became the cornerstone of India’s fintech revolution, enabling seamless, paperless onboarding and verification for financial services. Aadhaar-enabled biometric authentication, for instance, has transformed how identity verification is conducted, making the process entirely mobile-based. Perhaps the Unified Payments Interface (UPI) is the most profound disruptor. Introduced by the Indian government as part of its push for a cashless economy, UPI has redefined peer-to-peer (P2P) and person-to-merchant (P2M) transactions. As of September 2024, UPI transactions have reached a staggering 15 billion per month, with transaction values surpassing INR 20.6 trillion, marking a 16x increase in volume and a 13x increase in value over five years. UPI’s convenience and speed have made it the default payment mode for millions, further marginalising the role of traditional banking infrastructure. At the same time, blockchain technology is emerging as a force that could dramatically reduce bank operational costs. Decentralised, secure, and transparent, blockchain allows financial institutions to overhaul their legacy systems. 


Bridging the AI Skills Gap: Top Strategies for IT Teams in 2025

Daly explained that practical applications are key to learning, and creating cross-functional teams that include AI experts can facilitate knowledge sharing and the practical application of new skills. "To prepare for 2025 and beyond, it's crucial to integrate AI and ML into the core business strategy beyond R&D investment or technical roles, but also into broader organizational talent development," she said. "This ensures all employees understand the opportunity [and] potential impact, and are trained on responsible use." ... Kayne McGladrey, IEEE senior member and field CISO at Hyperproof, said AI ethics skills are important because they ensure that AI systems are developed and used responsibly, aligning with ethical standards and societal values. "These skills help in identifying and mitigating biases, ensuring transparency, and maintaining accountability in AI operations," he explained. ... Scott Wheeler, cloud practice lead at Asperitas, said building a culture of innovation and continual learning is the first step in closing a skills gap, particularly for newer technologies like AI. "Provide access to learning resources, such as on-demand platforms like Coursera, Udemy, Wizlabs," he suggested. "Embed learning into IT projects by allocating time in the project schedule and monitor and adjust the various programs based on what works or doesn't work for your organization."


What Makes the Ideal Platform Engineer?

Platform engineers decide on a platform — consisting of many different tools, workflows and capabilities — that DevOps, developers and others in the business can use to develop and monitor the development of software. They base these decisions on what will work best for these users. ... The old adage that every business is unique applies here; platform engineering doesn’t look the same in every organization, nor do the platforms or portals that are used. But there are some key responsibilities that platform engineers will often have and skills that they require. Noam Brendel is a DevOps team lead at Checkmarx, an application security firm that has embraced platform engineering. He believes a platform engineer’s focus should be on improving developer excellence. “The perfect platform engineer helps developers by building systems that eliminate bottlenecks and increase collaboration,” he said. ... “Platform engineers need to have a strong understanding of how everything is connected and how the platform is built behind the scenes,” explained Zohar Einy, CEO of Port, a provider of open internal developer portals. He emphasized the importance of knowing how the company’s technical stack is structured and which development tools are used.


Biometrics and AI Knock Out Passwords in the Security Battle

Biometrics and AI-powered authentication have moved beyond concept to successful application. For instance, HSBC's Voice ID voice identification technology analyzes over 100 characteristics of an individual's voice, maintains a sample of the customer's voice, and compares it to the caller's voice. ... The success of implementing biometrics and AI into existing systems relies on organizations to follow best practices. Organizational leaders can assess organizational needs by conducting a security audit to identify vulnerabilities that biometrics and AI can address. This information is then used to create a roadmap for implementation considering budget, resources, and timelines. Involving appropriate staff in such discussions is essential so all stakeholders understand the factors considered in decision-making. Selecting the right technology calls for careful vendor evaluation and identification of solutions that align with the organization's requirements and compliance obligations. Once these decisions are solidified, it is prudent to use pilot programs to start the integration. Small-scale deployments test effectiveness and address any unforeseen issues before large-scale implementation.
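As a toy illustration of matching a stored voiceprint against a caller, the sketch below compares invented feature vectors with cosine similarity; real systems such as the Voice ID service described above use far richer models, and the vectors and threshold here are made up.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Invented 4-dimensional "voice characteristic" vectors.
enrolled = [0.9, 0.1, 0.4, 0.8]    # stored sample of the customer's voice
caller   = [0.85, 0.15, 0.38, 0.82]
impostor = [0.1, 0.9, 0.7, 0.05]

THRESHOLD = 0.95   # illustrative decision boundary
print("caller accepted: ", cosine(enrolled, caller) > THRESHOLD)
print("impostor accepted:", cosine(enrolled, impostor) > THRESHOLD)
```

In practice the threshold trades false accepts against false rejects, and tuning it per deployment is one of the audit items the best-practices paragraph above alludes to.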


CISA, Five Eyes issue hardening guidance for communications infrastructure

The joint guidance is in direct response to the breach of telecommunications infrastructure carried out by the Chinese government-linked hacking collective known as Salt Typhoon. ... “Although tailored to network defenders and engineers of communications infrastructure, this guide may also apply to organizations with on-premises enterprise equipment,” the guidance states. “The authoring agencies encourage telecommunications and other critical infrastructure organizations to apply the best practices in this guide.” “As of this release date,” the guidance says, “identified exploitations or compromises associated with these threat actors’ activity align with existing weaknesses associated with victim infrastructure; no novel activity has been observed. Patching vulnerable devices and services, as well as generally securing environments, will reduce opportunities for intrusion and mitigate the actors’ activity.” Visibility, a cornerstone of network defense that enables monitoring, detecting, and understanding activities within infrastructure, is pivotal in identifying potential threats, vulnerabilities, and anomalous behaviors before they escalate into significant security incidents.


Tackling software vulnerabilities with smarter developer strategies

No two developers solve a problem or build a software product the same way. Some arrive at their career through formal college education, while others are self-taught with minimal mentorship. Styles and experiences vary wildly. Equally so, we should expect they will consider secure coding practices and guidelines with similar diversity of thought. Organizations must account for this wide diversity in their secure development practices – training, guidelines, standards. These may be foreign concepts to even a highly proficient developer, and we need to give our developers the time and space to learn and ask questions, with sufficient time to develop a secure coding proficiency. ... Best-in-class organizations have established ‘security champions’ programs where high-skilled developers are empowered to be a team-level resource for secure coding knowledge and best practice, in order for institutional knowledge to spread. This is particularly important in remote environments where security teams may be unfamiliar or untrusted faces, and the internal development team leaders are all the more important to set the tone and direction for adopting a security mindset and applying security principles.


Developing an AI platform for enhanced manufacturing efficiency

To power our AI Platform, we opted for a hybrid architecture that combines our on-premises infrastructure and cloud computing. The first objective was to promote agile development. The hybrid cloud environment, coupled with a microservices-based architecture and agile development methodologies, allowed us to rapidly iterate and deploy new features while maintaining robust security. The choice of a microservices architecture arose from the need to respond flexibly to changes in services and libraries, and as part of this shift, our team also adopted the Scrum development method, where we release features incrementally in short cycles of a few weeks, ultimately resulting in streamlined workflows. ... The second objective is to use resources effectively. The manufacturing floor, where AI models are created, is now also facing strict cost efficiency requirements. With a hybrid cloud approach, we can use on-premises resources during normal operations and scale to the cloud during peak demand, thus reducing GPU usage costs and optimizing performance. This allows us to flexibly adapt to an expected increase in the number of users of AI Platform in the future, as well.


Privacy is a human right, and blockchain is critical to securing it

While blockchain offers decentralized and secure transactions, the lack of privacy on public blockchains can expose users to risks, from theft to persecution. In October, details emerged of one of the largest in-person crypto thefts in US history after a DC man was targeted when kidnappers were able to identify him as an early crypto investor. However, despite the case for on-chain privacy, it’s proven difficult to advance any real-world implementations. Along with the regulatory challenges faced by segments such as privacy coins and mixers, certain high-profile missteps have done little to advance the case for on-chain privacy. Worldcoin, Sam Altman’s much-touted crypto identity project that collected biometric data from users, has also failed to live up to expectations due, perversely, to regulators’ concerns about breaches of users’ data privacy. In August, the government of Kenya suspended Worldcoin’s operations following concerns about data security and consent practices. In October, the company announced it was pivoting away from the EU and towards Asian and Latin American markets, following regulatory wrangling over the European GDPR rules.


Transforming fragmented legacy controls at large banks

You’re not just talking about replacing certain components of a process with technology. There’s also a cost to this change. It’s not always at the top of the list when budgets come around. Usually, spend goes on areas that are revenue generating or more in the innovation space. It can be somewhat of a hard sell to the higher-ups as to why they would spend money to change something, and a lot of organisations aren’t great at articulating the business case for it. ... If you take an operational resilience perspective, for example, that’s about being able to get your arms around your important business services, to use regulatory language. What is supporting them? What does it take to maintain them, keep them resilient and available, and recover them? The reality is that this used to be infinitely more straightforward. Most of the systems may have been in your own data centre in your own building. Now, the ecosystems that support most of these services are much more complex. You’ve obviously got cloud providers, SaaS providers, and third parties that you’ve outsourced to. You’ve also got a huge number of different services that, even if you’ve bought them and they’re in-house, involve a myriad of internal teams to navigate.


Why the Growing Adoption of IoT Demands Seamless Integration of IT and OT

Effective cybersecurity in OT environments requires a mix of skills and knowledge from both IT and OT teams. This includes professionals from IT infrastructure and cybersecurity, as well as control system engineers, field operations staff, and asset managers typically found in OT. ... The integration of IT and OT through advanced IoT protocols represents a major step forward in securing industrial and healthcare systems. However, this integration introduces significant challenges. I propose a new approach to IoT security that incorporates protocol-agnostic application layer security, lightweight cryptographic algorithms, dynamic key management, and end-to-end encryption, all based on zero-trust network architecture (ZTNA). ... In OT environments, remediation steps must go beyond traditional IT responses. While many IT security measures reset communication links and wipe volatile memory to prevent further compromise, additional processes are needed for identifying, classifying, and investigating cyber threats in OT systems. Furthermore, organizations can benefit from creating unified governance structures and cross-training programs that align the priorities of IT and OT teams. 
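One of the listed ingredients, dynamic key management, can be sketched with Python's standard library: each session key is derived from a provisioned master secret plus a device identifier and a rotating epoch counter, so a leaked session key does not expose traffic from other epochs or devices. The scheme and parameter names are illustrative, not drawn from any specific IoT standard.

```python
import hashlib
import hmac

def derive_session_key(master_secret: bytes, device_id: str, epoch: int) -> bytes:
    """Derive a per-device, per-epoch session key via HMAC-SHA256.
    Rotating `epoch` yields a fresh key without re-provisioning the device."""
    info = f"{device_id}|epoch={epoch}".encode()
    return hmac.new(master_secret, info, hashlib.sha256).digest()

master = b"provisioned-at-manufacture"        # illustrative master secret
k1 = derive_session_key(master, "sensor-42", epoch=1)
k2 = derive_session_key(master, "sensor-42", epoch=2)
print("keys differ:", k1 != k2, "| key length:", len(k1))
```

The derivation is cheap enough for constrained OT devices, and because both ends can recompute the key from the shared secret and the epoch, no key material ever crosses the network, which fits the zero-trust posture described above.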



Quote for the day:

"There are three secrets to managing. The first secret is have patience. The second is be patient. And the third most important secret is patience." -- Chuck Tanner