
Daily Tech Digest - November 16, 2025


Quote for the day:

"Life is 10% what happens to me and 90% of how I react to it." -- Charles Swindoll


Hybrid AI: The future of certifiable and trustworthy intelligence

An emerging approach in AI innovation is hybrid AI, which combines the scalability of machine learning (ML) with the constraint-checking and provenance of symbolic models. Hybrid AI forms a foundation for system-level certification and helps CIOs balance the pursuit of performance with the need for accountability. ... Clustering, a core unsupervised learning technique, organizes unlabeled data into groups based on similarity. It’s widely used to segment customers, group documents or analyze sensor data by measuring distances in a numeric feature space. But conventional clustering works on similarity alone and has no grasp of meaning. This can group items by coincidence rather than concept. ... For enterprise leaders, verifiability isn’t optional; it’s a governance requirement. Systems that support strategic or regulatory decisions must show constraint conformance and leave a traceable decision path. Ontology-driven clustering provides that foundation, creating an auditable chain of logic aligned with frameworks such as the NIST AI Risk Management Framework. In both government and industry, this hybrid approach makes AI more accountable and reliable. Trustworthiness is not a checkbox but an assurance case that connects data science, compliance and oversight. An organization that cannot trace what was allowed into a model or which constraints were applied does not truly control the decision.
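The gap between similarity-only and ontology-driven clustering can be sketched in a few lines. Everything below is an illustrative assumption, not from the article: a plain distance-based clusterer groups items on numeric features alone, and a symbolic audit pass then records any cluster that mixes concepts the ontology declares disjoint, producing the kind of traceable decision path described above.

```python
# Hypothetical sketch of ontology-constrained clustering: a numeric
# clusterer groups items, then symbolic rules audit the result.
# Item names, values, concepts, and rules are invented for illustration.

def assign_clusters(items, centroids):
    """Assign each (name, value, concept) item to its nearest centroid."""
    clusters = {i: [] for i in range(len(centroids))}
    for name, value, concept in items:
        nearest = min(range(len(centroids)), key=lambda i: abs(value - centroids[i]))
        clusters[nearest].append((name, concept))
    return clusters

def audit(clusters, disjoint_concepts):
    """Symbolic check: concepts declared disjoint must not share a cluster.
    Returns an auditable list of violations -- the traceable decision path."""
    violations = []
    for cid, members in clusters.items():
        concepts = {c for _, c in members}
        for a, b in disjoint_concepts:
            if a in concepts and b in concepts:
                violations.append((cid, a, b))
    return violations

items = [
    ("invoice-17", 0.9, "financial"),
    ("scan-044", 1.1, "medical"),   # numerically close to the invoice
    ("ledger-3", 5.0, "financial"),
]
clusters = assign_clusters(items, centroids=[1.0, 5.0])
violations = audit(clusters, disjoint_concepts=[("financial", "medical")])
print(violations)  # cluster 0 groups by coincidence, not concept
```

Distance alone puts the invoice and the scan together; the audit layer is what surfaces that the grouping violates the ontology, rather than silently accepting it.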


Upwork study shows AI agents excel with human partners but fail independently

The research challenges both the hype around fully autonomous AI agents and fears that such technology will imminently replace knowledge workers. "AI agents aren't that agentic, meaning they aren't that good," Andrew Rabinovich, Upwork's chief technology officer and head of AI and machine learning, said in an exclusive interview with VentureBeat. "However, when paired with expert human professionals, project completion rates improve dramatically, supporting our firm belief that the future of work will be defined by humans and AI collaborating to get more work done, with human intuition and domain expertise playing a critical role." ... The research reveals stark differences in how AI agents perform with and without human guidance across different types of work. For data science and analytics projects, Claude Sonnet 4 achieved a 64% completion rate working alone but jumped to 93% after receiving feedback from a human expert. In sales and marketing work, Gemini 2.5 Pro's completion rate rose from 17% independently to 31% with human input. OpenAI's GPT-5 showed similarly dramatic improvements in engineering and architecture tasks, climbing from 30% to 50% completion. The pattern held across virtually all categories, with agents responding particularly well to human feedback on qualitative, creative work requiring editorial judgment — areas like writing, translation, and marketing — where completion rates increased by up to 17 percentage points per feedback cycle.


Debunking AI Security Myths for State and Local Governments

As state and local governments adopt AI, they must return to cybersecurity basics and strengthen core principles to help build resilience and earn public trust. For AI workloads, governments should apply zero-trust principles; for example, continuously verifying identities, limiting access by role and segmenting system components. Clear data policies for access, protection and backups help safeguard sensitive information and keep systems resilient. Perhaps most important, security teams need to be involved early in AI design conversations to build in security from the start. ... As state and local governments deploy more sophisticated AI systems, it’s crucial to view the technology as a partner, not a replacement for human intelligence. There is a misconception that advanced AI — particularly agentic AI, which can make its own decisions — eliminates the need for human oversight. The truth is, responsible AI deployment hinges on human oversight and strong governance. The more autonomous an AI system becomes, the more essential human governance is. ... Securing AI is not a one-time milestone. It’s an ongoing process of preparation and adaptation as the threat landscape evolves. For state and local governments advancing their AI initiatives, the path forward centers on building resilience and confidence. And the good news is, they don’t need to start from scratch. The tools and strategies already exist.


When Open Source Meets Enterprise: A Fragile Alliance

The answer is by no means simple; it is determined by a number of factors, of which the vendor’s ethos is one of the most important. Some vendors genuinely give back to the open-source communities from which they gain value. Others are more extractive, building closed proprietary layers atop open foundations and pushing little back to the community. The difference matters enormously. Organisations hold true optionality when a vendor actively maintains the open-source core, while keeping its proprietary features genuinely additive rather than substitutive. In theory, they could shift to another provider or take the open-source components in-house should the relationship sour. ... Commercial open-source vendors can provide training, certification, and managed services to fill this gap, for a fee naturally. Then there is innovation velocity. Open-source communities can move incredibly quickly, with contributions from numerous sources, enabling organisations to adopt cutting-edge features faster than conventional enterprise procurement cycles allow. Conversely, vital security patches can stall if a project lacks maintainers, creating unacceptable exposure for risk-averse organisations. ... Ultimately, the question is not whether open source should exist within the enterprise; that debate has been resolved. The challenge lies in thoughtfully incorporating open-source components into broader technology strategies that balance innovation, resilience, sovereignty, and pragmatic risk management.


The Hidden Cost of Technical Debt in Databases

At its core, technical debt represents the trade-off between speed and quality. When a development team chooses a “quick and dirty” path to meet a deadline, debt is incurred. The database world sees the same phenomenon. ... The first step to eliminating technical debt is recognition. DBAs must adopt a mindset that managing technical debt is part of the job. Although it can be enticing to quickly fix a problem and move on, it should always be a part of the job to reflect on the potential future impact of any change that is made. ... Importantly, DBAs also sit at the crossroads between technical staff and business stakeholders. They can explain how technical debt translates into business impact: lost productivity, slower application delivery, higher infrastructure costs, and greater operational risk. This ability to connect database health to business outcomes is essential for winning support to tackle debt. In practice, the DBA’s role involves three things: identification, communication, and advocacy. DBAs must identify where debt exists, communicate its impact clearly, and advocate for resources to remediate it. Sometimes that means lobbying for time to redesign a schema, other times it means convincing leadership that archiving inactive data will save more money than buying new storage. Yet other times it may involve championing a new tool or process that automates required tasks and keeps technical debt from accumulating.


Seek Skills, Not Titles

Titles feel good—at first. They make your resume and LinkedIn profile look prettier. But when you confuse your title for your identity, you’re setting yourself up for a rude awakening. Titles can be taken away. Or they just expire, like milk in the back of the fridge. Your skills, on the other hand? No one can take those away from you. ... Some roles taught me how to work hard and build trust. Some taught me to communicate clearly and adapt quickly. Others taught me to see the big picture and act decisively. The titles didn’t teach me those skills; the experience did. ... It’s easy to let your job title become your identity, especially when you’re leading at a high level. Everyone wants something from you. Board members, investors, employees. They project their version of who they think you should be. You must have clarity on your core values. Not the company’s core values, but your own. Otherwise, you’ll find yourself playing a dozen different roles without knowing which one is actually you. ... Don’t wait for the title to teach you a skill. Start now. The best way to grow is to pursue skills that will open up opportunities, especially the ones that align with your personal values. Because when your values and skills match, your impact multiplies, regardless of the title. When has pursuing a title led you away from the skills you truly needed? What impact have you seen when your skills are aligned with your values? How might you need to detour to get back on the right track?


Strategic Autarky for the AI Age

AI is still emerging. Overspecifying rules, enforcing rigid certification pathways, or creating sector-specific chokepoints too early can stifle the very innovation we aim to promote. Burdensome compliance layers, mandated algorithmic disclosures, prescriptive model testing protocols, and fragmented approval processes can all create friction. Overregulation can discourage experimentation, elevate the cost of market entry, and drain our fastest growing startups. The risk is simple. Innovation flight. Loss of competitive edge. A domestic ecosystem slowed down before it reaches maturity. Balancing sovereignty and innovation, therefore, becomes the central task. India cannot afford to remain dependent, but it also cannot smother its own technological growth. India’s new AI Governance Framework addresses this balance directly. It follows seven guiding principles built around trust, accountability, transparency, privacy, security, human-centricity, and collaboration. The standout feature is its “light touch” approach. Instead of imposing rigid controls, the framework sets high-level principles that can evolve with technology. It relies on India’s existing legal foundation, including the Digital Personal Data Protection Act and the Information Technology Act, and is supported by institutional structures like the AI Governance Group and the AI Safety Institute. The framework contains several strong provisions. It encourages voluntary risk assessments rather than mandatory rigid audits for most systems.


Google Brain founder Andrew Ng thinks you should still learn to code - here's why

"Because AI coding has lowered the bar to entry so much, I hope we can encourage everyone to learn to code -- not just software engineers," Ng said during his keynote. How AI will impact jobs and the future of work is still unfolding. Regardless, Ng told ZDNET in an interview that he thinks everyone should know the basics of how to use AI to code, equivalent to knowing "a little bit of math" -- still a hard skill, but one applied more generally across many careers, for whatever you may need. "One of the most important skills of the future is the ability to tell a computer exactly what you want it to do for you," he said, noting that everyone should know enough to speak a computer's language without needing to write code themselves. "Syntax, the arcane incantations we use, that's less important." ... The new challenge for developers, Ng said during the panel, will be coming up with the concept of what they want. Hedin agreed, adding that if AI is doing the coding in the future, developers should focus on their intuition when building a product or tool. "The thing that AI will be worst at is understanding humans," he said. ... He cited the overhiring sprees tech companies went on -- and then ultimately reversed -- during the COVID-19 pandemic as the primary reason entry-level coding jobs are hard to come by. Beyond that, though, it's a question of grads having the right kind of coding skills.


How Development Teams Are Rethinking the Way They Build Software

While low-code/no-code platforms accelerate development, they can become challenging when trying to achieve high levels of customization or when dealing with complex systems. Custom solutions might be more cost-effective for highly specialized applications. Low-code and no-code platforms must provide clear guidance to users within a structured framework to minimize mistakes, and they may offer less flexibility compared to traditional coding. AI tools can be easily used to generate code, suggest optimizations, or even create entire applications based on natural language prompts. However, they work best when integrated into a broader development ecosystem, not as standalone solutions. ... The future of software development appears to be a blended approach, where traditional programming, low-code/no-code platforms, and AI each play a role. The key to success in this dynamic landscape is understanding when to use each method, ensuring C-level executives, team leaders, and team members are versatile and leverage technology to enhance, rather than replace, human ingenuity. Let me share my firsthand experience. When I asked my developers a year ago how they thought using AI tools at work would evolve, many said: “I expect that as the tools improve, I’ll shift from mostly writing code to mostly reviewing AI-generated code.” Fast forward a year, and when we posed the same question, a common theme emerged: “We are spending less time writing the mundane stuff.”


Businesses must bolster cyber resilience, now more than ever

Cyber upskilling must be built into daily work for both technical and non-technical employees. It’s not a one-off training exercise; it’s part of how people perform their roles confidently and securely. For technical teams, staying current on certifications and practising hands-on defence is essential. Labs and sandboxes that simulate real-world attacks give them the experience needed to respond effectively when incidents happen. For everyone else, the focus should be on clarity and relevance. Employees need to understand exactly what’s expected of them and how their individual decisions contribute to the organisation’s resilience. ... Boards aren’t expected to manage technical defences, but they are responsible for ensuring the organisation can withstand, recover from, and learn after a cyber disruption. Cyber incidents have evolved into full business continuity events, affecting operations, supply chains, and reputation. Resilience should now sit alongside financial performance and sustainability as a core board KPI. That means directors receiving regular updates not only on threat trends and audit findings, but also on recovery readiness, incident transparency, and the cultural maturity of the organisation’s response. Re-engaging boards on this agenda isn’t about assigning blame—it’s about enabling smarter oversight. When leaders understand how resilience protects trust, continuity, and brand, cybersecurity stops being a technical issue and becomes what it truly is: a measure of business strength.

Daily Tech Digest - October 21, 2024

Choosing the Right Tech Stack: The Key to Successful App Development

Choosing the right tech stack is critical because the tech stack you opt to use will shape virtually every aspect of your development project. It determines which programming language you can use, as well as which modules, libraries, and other pre-built components you can take advantage of to speed development. It has implications for security, since some tech stacks are easier to secure than others. It influences the application performance and operating cost because it plays an important role in determining how many resources the application will consume. And so on. ... Building a secure application is important in any context. But if you face special compliance requirements — for example, if you're building a finance or healthcare app, which are subject to special compliance mandates in many places — you may need to guarantee an extra level of security. To that end, make sure the tech stack you choose offers whichever level of security controls you need to meet your compliance requirements. A tech stack alone won't guarantee that your app is compliant, but choosing the right tech stack makes it easier for you to build a compliant app.


What is hybrid AI?

Rather than relying on a single method, hybrid AI integrates various systems, such as rule-based symbolic reasoning, machine learning and deep learning, to create systems that can reason, learn, and adapt more effectively than any single approach can on its own. ... Symbolic AI, which is often referred to as rule-based AI, focuses on using logic and explicit rules to solve problems. It excels in reasoning, structured data processing and interpretability but struggles with handling unstructured data or large-scale problems. Machine learning (ML), on the other hand, is data-driven and excels at pattern recognition and prediction. It works well when paired with large datasets, identifying trends without needing explicit rules. However, ML models are often difficult to interpret and may struggle with tasks requiring logical reasoning. Hybrid AI that combines symbolic AI with machine learning makes the most of the reasoning power of symbolic systems as well as the adaptability of machine learning. For instance, a system could use symbolic AI to follow medical guidelines for diagnosing a patient, while machine learning analyses patient records and test results to offer individual recommendations.
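The medical example above can be sketched as a minimal hybrid pipeline. Everything here is a hypothetical stand-in: the "model" is a toy weighted sum, and the rule, features, and threshold are invented for illustration, not real clinical guidelines. The point is the division of labour: explicit rules fire first and carry their own provenance, while the learned score decides only when no rule applies.

```python
# Hypothetical sketch of a hybrid symbolic + ML pipeline.
# The rule, features, weights, and threshold are illustrative assumptions.

def ml_risk_score(patient):
    """Stand-in for a trained model: a toy weighted sum of features."""
    return 0.02 * patient["age"] + 0.5 * patient["abnormal_tests"]

def symbolic_guidelines(patient):
    """Explicit rules fire regardless of the learned score."""
    if patient["age"] >= 65 and patient["abnormal_tests"] >= 2:
        return "refer"       # hard rule: always refer this profile
    return None              # no rule applies; defer to the model

def hybrid_decision(patient, threshold=1.5):
    rule = symbolic_guidelines(patient)
    if rule is not None:
        return rule, "rule"  # decision plus its provenance
    score = ml_risk_score(patient)
    return ("refer" if score >= threshold else "monitor"), "model"

print(hybrid_decision({"age": 70, "abnormal_tests": 2}))  # rule fires
print(hybrid_decision({"age": 40, "abnormal_tests": 1}))  # model decides
```

Returning the provenance tag alongside the decision is what makes the symbolic layer interpretable: every output records whether a guideline or the learned model produced it.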


6 Roadblocks to IT innovation

Innovation doesn’t happen by happenstance, says Sean McCormack, a seasoned tech exec who has led innovation efforts at multiple companies. True, someone might have an idea that seemingly comes out of the blue, but that person needs a runway to turn that inspiration into innovation that takes flight. That runway is missing in a lot of organizations. “Oftentimes there’s no formal process or approach,” McCormack says. Consequently, inspired workers must try to muscle through their bright ideas as best they can; they often fail due to the lack of support and structure that would bring the money, sponsors, and skills needed to build and test it. “You have to be purposeful with how you approach innovation,” says McCormack, now CIO at First Student, North America’s largest provider of student transportation. ... Taking a purposeful approach enables innovation in several ways, McCormack explains. First, it prioritizes promising ideas and funnels resources to those ideas, not weaker proposals. It also ensures promising ideas get attention rather than be put on a back burner while everyone deals with day-to-day tasks. And it prevents turf wars between groups, so, for example, a business unit won’t run away with an innovation that IT proposed.


Cyber Criminals Hate Cybersecurity Awareness Month

In the world of enterprises, the expectations for restoring data and backing up data at multi-petabyte scale have changed. IT teams need to increase next-generation data protection capabilities, while reducing overall IT spending. It gets even more complicated when you consider all the applications, databases, and file systems that generate different types of workloads. No matter what, the business needs the right data at the right time. To deliver this consistency, the data needs to be secured. Next-generation data protection starts when the data lands in the storage array. There needs to be high reliability with 100% availability. There also needs to be data integrity. Each time data is accessed, the storage system should check and verify the data to ensure the highest degree of data integrity. Cyber resilience best practices require that you ensure data validity, as well as near-instantaneous recovery of primary storage and backup repositories, regardless of the size. This accelerates disaster recovery when a cyberattack happens. Greater awareness of best practices in cyber resilience would be one of the crowning achievements of this October as Cybersecurity Awareness Month. Let’s make it so.
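The verify-on-access idea above can be sketched with a checksum stored alongside each blob when it lands and recomputed on every read. The in-memory store below is a hypothetical stand-in for a storage array, assuming SHA-256 as the integrity check.

```python
# Hypothetical sketch of verify-on-read: store a checksum when data lands,
# recompute it on every access, and refuse reads that fail the check.
import hashlib

class ChecksummedStore:
    def __init__(self):
        self._blobs = {}  # key -> (data, sha256 hex digest)

    def write(self, key, data: bytes):
        self._blobs[key] = (data, hashlib.sha256(data).hexdigest())

    def read(self, key) -> bytes:
        data, digest = self._blobs[key]
        if hashlib.sha256(data).hexdigest() != digest:
            raise IOError(f"integrity check failed for {key!r}")
        return data

store = ChecksummedStore()
store.write("backup-001", b"payload")
print(store.read("backup-001"))  # verification passes on an intact blob
```

A real array does this in hardware or firmware and per block rather than per object, but the contract is the same: a read that fails verification is surfaced as an error, never silently returned.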


6 Strategies for Maximizing Cloud Storage ROI

Rising expenses in cloud data storage have prompted many organizations to reconsider their strategies, leading to a trend of repatriation as enterprises seek more control during these unpredictable economic times. A February 2024 Citrix poll revealed that 94% of organizations had shifted some workloads back to on-premises systems, driven by concerns over security, performance, costs, and compatibility. ... Common tactics of re-architecting applications, managing cloud sprawl and monitoring spend using the tools each cloud provides are a good first step. However, these methods are not the full picture. Storage optimization is an integral piece. Focusing on cloud storage costs first is a smart strategy since storage constitutes a large chunk of the overall spend. More than half of IT organizations (55%) will spend more than 30% of their IT budget on data storage and backup technology, according to our recent State of Unstructured Data Management report. The reality is that most organizations don’t have a clear idea on current and predicted storage costs. They do not know how to economize, how much data they have, or where it resides.


As Software Code Proliferates, Security Debt Becomes a More Serious Threat

As AI-generated code proliferates, it compounds an already common problem, filling code bases with insecure code that will likely become security debt, increasing the risks to organizations. Just like financial debt, security debt can accrue quickly over time, the result of organizations compromising security measures in favor of convenience, speed or cost-cutting measures. Security debt, introduced by both first-party and third-party code, affects organizations of all sizes. More than 70% of organizations have security debt ingrained in their systems — and nearly half have critical debt. Over time, this accumulated debt poses serious risks because, as with financial debt, the bill will become due — potentially in the form of costly and consequential security breaches that can put an organization's data, reputation and overall stability at stake. ... Amid the dark clouds gathering over security debt, there is one silver lining. The number of high-severity flaws in organizations has been cut in half since 2016, which is clear evidence that organizations have made some progress in implementing secure software practices. It also demonstrates the tangible impact of quickly remediating critical security debt.


Why Liability Should Steer Compliance with the Cyber Security and Resilience Bill

First and foremost, the regulations are likely to involve an overhaul that will require a management focus. In the case of NIS2, for example, the board is tasked with taking responsibility for and maintaining oversight of the risk management strategy. This will require management bodies to undergo training themselves as well as to arrange training for their employees in order to equip themselves with sufficient knowledge and skills to identify risks and assess cybersecurity risk management practices. Yet NIS2 also breaks new ground in that it not only places responsibility for oversight of the risk strategy firmly at the feet of the board but goes on to state individuals could be held personally liable if they fail to exercise those responsibilities. Under article 32, authorities can temporarily prohibit any person responsible for discharging managerial responsibilities at CEO or a similar level from exercising managerial functions – in other words they can be suspended from office. We don’t know if the Cyber Security and Resilience Bill will take a similar tack but NIS2 is by no means alone in this approach. 


Tackling operational challenges in modern data centers

Supply chain bottlenecks continue to plague data centers, as shortages of critical components and materials lead to delays in shipping, sliding project timelines, and increased costs for customers. Many data center operators have become unable to meet their need for affected equipment such as generators, UPS batteries, transformers, servers, building materials, and other big-ticket items. This gap in availability is leading many to settle for any readily available items, even if not from their preferred vendor. ... The continuous heavy power consumption of data centers can strain local electrical utility systems with limited supply or transmission capacity. This poses a question of whether areas heavily populated with data centers, like Northern Virginia, Columbus, and Pittsburgh, have enough electricity capacity, and if they should only be permitted to use a certain percentage of grid power. ... Like the rest of the world, data centers are now facing a climate crisis as temperatures rise and extreme weather events intensify. Data centers are also seeking ways to increase their power load and serve higher client demand, without significantly increasing their electricity and emissions burdens.


The AI-driven capabilities transforming the supply chain

In today’s supply chain environment, there really is no room for disruption — be it labor shortages, geopolitical strife or malfunctions within manufacturing. To keep up with demand, supply chain teams are focused on continuous improvement and finding ways to remove the burden on expensive manual labor in favor of automated, digital solutions. When faulty products come off the production line, the problem must be addressed quickly. AI can accelerate the resolution process faster than human labor in many instances — preventing production standstills and even catching errors before they occur. Engineers who are creating a product can lean on these insights too, using AI to assess all the errors that have happened in the past to make sure that they don’t happen in the future. ... Through camera footage and visual inspections, AI models can help detect errors, faults or defects in equipment before they happen. If the technology identifies an issue — or predicts the need for maintenance — teams can arrange for a technician to perform repairs. This predictive maintenance minimizes unplanned outages, reduces disruptions across the supply chain and optimizes asset performance.
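At its simplest, the predictive-maintenance pattern above reduces to flagging drift before failure. A minimal sketch, assuming a rolling-average threshold on a hypothetical vibration signal (the readings, window, and threshold are invented for illustration; real systems would use trained models over many sensors):

```python
# Hypothetical sketch of threshold-based predictive maintenance:
# flag equipment for a technician when the rolling mean of sensor
# readings drifts past an alert level. Values are illustrative.
from collections import deque

def maintenance_alerts(readings, window=3, threshold=80.0):
    """Return indices where the rolling mean crosses the alert threshold."""
    buf = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(readings):
        buf.append(value)
        if len(buf) == window and sum(buf) / window > threshold:
            alerts.append(i)
    return alerts

vibration = [70, 72, 71, 74, 83, 88, 91]  # slow drift toward failure
print(maintenance_alerts(vibration))  # → [5, 6]
```

The rolling window smooths out single-sample noise, so a technician is dispatched on a sustained trend rather than one spiky reading.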


What makes a great CISO

Security settings were once viewed as binary — on or off — but today, security programs need to be designed to help organizations adapt and respond with minimal impact when incidents occur. Response and resilience planning now involves cybersecurity and business operations teams, requiring the CISO to engage across the organization, especially during incidents. ... In the past, those with a SecOps background often focused on operational security, while those with a GRC background leaned toward prioritizing compliance to manage risk, according to Paul Connelly, former CISO now board advisor, independent director and CISO mentor. “Infosec requires a base competence in technology, but a CISO doesn’t have to be an engineer or developer,” says Connelly. A broad understanding of infosec responsibilities is needed, but the CISO can come from any part of the team, including IT or even internal audit. Exposure to different industries and companies brings a valuable diversity of thinking. Above all, modern CISOs must prioritize aligning security efforts with business objectives. “Individuals who have zig-zagged through an organization, getting wide exposure, are better prepared than someone who rose through the ranks focused in SecOps or another single area of focus,” says Connelly.



Quote for the day:

"The great leaders are like best conductors. They reach beyond the notes to reach the magic in the players." -- Blaine Lee