
Daily Tech Digest - August 31, 2025


Quote for the day:

“Our chief want is someone who will inspire us to be what we know we could be.” -- Ralph Waldo Emerson



A Brief History of GPT Through Papers

The first neural network based language translation models operated in three steps (at a high level). An encoder would embed the “source statement” into a vector space, resulting in a “source vector”. Then, the source vector would be mapped to a “target vector” through a neural network and finally a decoder would map the resulting vector to the “target statement”. People quickly realized that the vector that was supposed to encode the source statement had too much responsibility. The source statement could be arbitrarily long. So, instead of a single vector for the entire statement, let’s convert each word into a vector and then have an intermediate element that would pick out the specific words that the decoder should focus more on. ... The mechanism by which the words were converted to vectors was based on recurrent neural networks (RNNs). Details of this can be obtained from the paper itself. These recurrent neural networks relied on hidden states to encode the past information of the sequence. While it’s convenient to have all that information encoded into a single vector, it’s not good for parallelizability since that vector becomes a bottleneck and must be computed before the rest of the sentence can be processed. ... The idea is to give the model demonstrative examples at inference time as opposed to using them to train its parameters. If no such examples are provided in-context, it is called “zero shot”. If one example is provided, “one shot” and if a few are provided, “few shot”.
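The attention mechanism described above (one vector per word, with an intermediate element picking out which words the decoder should focus on) can be sketched in a few lines of NumPy. This is a generic scaled dot-product attention sketch, not code from any of the papers; the shapes and random values are purely illustrative.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query attends over all key/value vectors, so no single
    vector has to summarize the entire source sentence."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of the query to every source word
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over source words
    return weights @ V, weights

# Toy example: 3 source words, 1 decoder query, 4-dimensional embeddings
rng = np.random.default_rng(0)
K = V = rng.normal(size=(3, 4))   # one vector per source word
Q = rng.normal(size=(1, 4))       # the decoder's current query
context, weights = scaled_dot_product_attention(Q, K, V)
```

Because every word keeps its own vector and the weights are computed per query, the whole sentence no longer has to squeeze through one bottleneck vector, and the per-word computations parallelize in a way RNN hidden states cannot.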


8 Powerful Lessons from Robert Herjavec at Entrepreneur Level Up That Every Founder Needs to Hear

Entrepreneurs who remain curious — asking questions and seeking insights — often discover pathways others overlook. Instead of dismissing a "no" or a difficult response, Herjavec urged attendees to look for the opportunity behind it. Sometimes, the follow-up question or the willingness to listen more deeply is what transforms rejection into possibility. ... while breakthrough innovations capture headlines, the majority of sustainable businesses are built on incremental improvements, better execution and adapting existing ideas to new markets. For entrepreneurs, this means it's okay if your business doesn't feel revolutionary from day one. What matters is staying committed to evolving, improving and listening to the market. ... setbacks are inevitable in entrepreneurship. The real test isn't whether you'll face challenges, but how you respond to them. Entrepreneurs who can adapt — whether by shifting strategy, reinventing a product or rethinking how they serve customers — are the ones who endure. ... when leaders lose focus, passion or clarity, the organization inevitably follows. A founder's vision and energy cascade down into the culture, decision-making and execution. If leaders drift, so does the company. For entrepreneurs, this is a call to self-reflection. Protect your clarity of purpose. Revisit why you started. And remember that your team looks to you not just for direction, but for inspiration. 


The era of cheap AI coding assistants may be over

Developers have taken to social media platforms and GitHub to express their dissatisfaction over the pricing changes, especially across tools like Claude Code, Kiro, and Cursor, but vendors have not adjusted pricing or made any changes that significantly reduce credit consumption. Analysts see no near-term prospect of lower pricing for these tools. "There’s really no alternative until someone figures out the following: how to use cheaper but dumber models than Claude Sonnet 4 to achieve the same user experience and innovate on KVCache hit rate to reduce the effective price per dollar,” said Wei Zhou, head of AI utility research at SemiAnalysis. Considering the market conditions, CIOs and their enterprises need to start absorbing the cost and treat vibe coding tools as a productivity expense, according to Futurum’s Hinchcliffe. “CIOs should start allocating more budgets for vibe coding tools, just as they would do for SaaS, cloud storage, collaboration tools or any other line items,” Hinchcliffe said. “The case of ROI on these tools is still strong: faster shipping, fewer errors, and higher developer throughput. Additionally, a good developer costs six figures annually, while vibe coding tools are still priced in the low-to-mid thousands per seat,” Hinchcliffe added. ... “Configuring assistants to intervene only where value is highest and choosing smaller, faster models for common tasks and saving large-model calls for edge cases could bring down expenditure,” Hinchcliffe added.
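Hinchcliffe's closing suggestion, using smaller and faster models for common tasks while saving large-model calls for edge cases, amounts to a simple routing policy. A minimal sketch, with hypothetical model names and per-token prices (not real vendor pricing):

```python
# Hypothetical cost-routing sketch: model names and prices are illustrative.
CHEAP_MODEL = ("small-fast-model", 0.25)      # (name, $ per 1M tokens)
LARGE_MODEL = ("large-frontier-model", 15.00)

ROUTINE_TASKS = {"rename", "format", "docstring", "boilerplate"}

def pick_model(task_kind: str) -> tuple[str, float]:
    """Send common tasks to the cheap model; reserve the large
    model for edge cases, per the cost-control advice above."""
    if task_kind in ROUTINE_TASKS:
        return CHEAP_MODEL
    return LARGE_MODEL

assert pick_model("docstring")[0] == "small-fast-model"
assert pick_model("cross-module-refactor")[0] == "large-frontier-model"
```

In practice the routing signal would come from the assistant's own task classification rather than a hand-written set, but the budget effect is the same: the expensive model only sees the requests that justify its price.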


AI agents need intent-based blockchain infrastructure

By integrating agents with intent-centric systems, however, we can ensure users fully control their data and assets. Intents are a type of building block for decentralized applications that give users complete control over the outcome of their transactions. Powered by a decentralized network of solvers, agentic nodes that compete to solve user transactions, these systems eliminate the complexity of the blockchain experience while maintaining user sovereignty and privacy throughout the process. ... Combining AI agents and intents will redefine the Web3 experience while keeping the space true to its core values. Intents bridge users and agents, ensuring the UX benefits users expect from AI while maintaining decentralization, sovereignty and verifiability. Intent-based systems will play a crucial role in the next phase of Web3’s evolution by ensuring agents act in users’ best interests. As AI adoption grows, so does the risk of replicating the problems of Web2 within Web3. Intent-centric infrastructure is the key to addressing both the challenges and opportunities that AI agents bring and is necessary to unlock their full potential. Intents will be an essential infrastructure component and a fundamental requirement for anyone integrating or considering integrating AI into DeFi. Intents are not merely a type of UX upgrade or optional enhancement. 


The future of software development: To what extent can AI replace human developers?

Rather than replacing developers, AI is transforming them into higher-level orchestrators of technology. The emerging model is one of human-AI collaboration, where machines handle the repetitive scaffolding and humans focus on design, strategy, and oversight. In this new world, developers must learn not just to write code, but to guide, prompt, and supervise AI systems. The skillset is expanding from syntax and logic to include abstraction, ethical reasoning, systems thinking, and interdisciplinary collaboration. In other words, AI is not making developers obsolete. It is making new demands on their expertise. ... This shift has significant implications for how we educate the next generation of software professionals. Beyond coding languages, students will need to understand how to evaluate AI-generated output, how to embed ethical standards into automated systems, and how to lead hybrid teams made up of both humans and machines. It also affects how organisations hire and manage talent. Companies must rethink job descriptions, career paths, and performance metrics to account for the impact of AI-enabled development. Leaders must focus on AI literacy, not just technical competence. Professionals seeking to stay ahead of the curve can explore free programs, such as The Future of Software Engineering Led by Emerging Technologies, which introduces the evolving role of AI in modern software development.


Open Data Fabric: Rethinking Data Architecture for AI at Scale

The first principle, unified data access, ensures that agents have federated real-time access across all enterprise data sources without requiring pipelines, data movement, or duplication. Unlike human users who typically work within specific business domains, agents often need to correlate information across the entire enterprise to generate accurate insights. ... The second principle, unified contextual intelligence, involves providing agents with the business and technical understanding to interpret data correctly. This goes far beyond traditional metadata management to include business definitions, domain knowledge, usage patterns, and quality indicators from across the enterprise ecosystem. Effective contextual intelligence aggregates information from metadata, data catalogs, business glossaries, business intelligence tools, and tribal knowledge into a unified layer that agents can access in real-time.  ... Perhaps the most significant principle involves establishing collaborative self-service. This is a significant shift as it means moving from static dashboards and reports to dynamic, collaborative data products and insights that agents can generate and share with each other. The results are trusted “data answers,” or conversational, on-demand data products for the age of AI that include not just query results but also the business context, methodology, lineage, and reasoning that went into generating them.
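A "data answer" as described here is essentially a query result bundled with the context needed to trust it. A minimal sketch of what such an object might carry; the field names and values are hypothetical, not from any particular product:

```python
from dataclasses import dataclass, field

@dataclass
class DataAnswer:
    """Hypothetical 'data answer': not just a query result, but the
    business context, methodology, and lineage behind it."""
    question: str
    result: object
    business_context: str
    methodology: str
    lineage: list = field(default_factory=list)

answer = DataAnswer(
    question="Q3 churn rate?",
    result=0.042,
    business_context="Churn = canceled subscriptions / active at period start",
    methodology="Federated query over CRM and billing sources; no data copied",
    lineage=["crm.accounts", "billing.subscriptions"],
)
```

Packaging the reasoning alongside the number is what lets one agent reuse another agent's output without re-deriving or blindly trusting it.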


A Simple Shift in Light Control Could Revolutionize Quantum Computing

A research collaboration led by Vikas Remesh of the Photonics Group at the Department of Experimental Physics, University of Innsbruck, together with partners from the University of Cambridge, Johannes Kepler University Linz, and other institutions, has now demonstrated a way to bypass these challenges. Their method relies on a fully optical process known as stimulated two-photon excitation. This technique allows quantum dots to emit streams of photons in distinct polarization states without the need for electronic switching hardware. In tests, the researchers successfully produced high-quality two-photon states while maintaining excellent single-photon characteristics. ... “The method works by first exciting the quantum dot with precisely timed laser pulses to create a biexciton state, followed by polarization-controlled stimulation pulses that deterministically trigger photon emission in the desired polarization,” explain Yusuf Karli and Iker Avila Arenas, the study’s first authors. ... “What makes this approach particularly elegant is that we have moved the complexity from expensive, loss-inducing electronic components after the single photon emission to the optical excitation stage, and it is a significant step forward in making quantum dot sources more practical for real-world applications,” notes Vikas Remesh, the study’s lead researcher.


AI and the New Rules of Observability

The gap between "monitoring" and true observability is both cultural and technological. Enterprises haven't matured beyond monitoring because old tools weren't built for modern systems, and organizational cultures have been slow to evolve toward proactive, shared ownership of reliability. ... One blind spot is model drift, which occurs when data shifts, rendering its assumptions invalid. In 2016, Microsoft's Tay chatbot was a notable failure due to its exposure to shifting user data distributions. Infrastructure monitoring showed uptime was fine; only semantic observability of outputs would have flagged the model's drift into toxic behavior. Hidden technical debt or unseen complexity in code can undermine observability. In machine learning, or ML, systems, pipelines often fail silently, while retraining processes, feature pipelines and feedback loops create fragile dependencies that traditional monitoring tools may overlook. Another issue is "opacity of predictions." ... AI models often learn from human-curated priorities. If ops teams historically emphasized CPU or network metrics, the AI may overweigh those signals while downplaying emerging, equally critical patterns - for example, memory leaks or service-to-service latency. This can occur as bias amplification, where the model becomes biased toward "legacy priorities" and blind to novel failure modes. Bias often mirrors reality.
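Model drift of the kind described here, where input distributions shift away from training-time assumptions, is often flagged with a simple statistic rather than infrastructure metrics. A pure-Python sketch of the Population Stability Index, a common drift signal (the 0.2 threshold is a widely used rule of thumb, not a standard):

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a
    production sample of one feature. Higher = more distribution shift."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def hist(xs):
        counts = [0] * bins
        for x in xs:
            counts[sum(x > e for e in edges)] += 1
        n = len(xs)
        return [max(c / n, 1e-6) for c in counts]  # avoid log(0)

    p, q = hist(expected), hist(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

baseline = [0.1 * i for i in range(100)]        # training-time feature values
shifted = [0.1 * i + 5.0 for i in range(100)]   # production values, drifted

assert psi(baseline, baseline) < 0.01
assert psi(baseline, shifted) > 0.2
```

Uptime dashboards would show nothing wrong in either case; only this kind of check on the data (or on model outputs) surfaces the semantic failure.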


Dynamic Integration for AI Agents – Part 1

Integration of components within AI differs from integration between AI agents. The former involves integration with known entities that form a deterministic model of information flow, as do the inter-application, inter-system and inter-service transactions required by a business process at large. It is based on mapping business functionality and information (the architecture of the business in organisations) onto available IT systems, applications, and services. The latter shifts the integration paradigm, since the AI agents themselves decide at runtime that they need to integrate with something, based on the overlap between the LLM's statistical model and the available information, which may contain linguistic ties unknown even to the LLM's training data. That is, an AI agent does not know which counterpart — an application, another AI agent or a data source — it will need to cooperate with to solve the overall task given to it by its consumer/user. The AI agent does not even know whether the needed counterpart exists. ... Any AI agent may have its own individual owner and provider. These owners and providers may be unaware of each other and act independently when creating their AI agents. No AI agent can be self-sufficient by its fundamental design — it depends on prompts and real-world data at runtime. It seems that the approaches to integration and the integration solutions differ for the humanitarian and natural science spheres.


Counteracting Cyber Complacency: 6 Security Blind Spots for Credit Unions

Organizations that conduct only basic vendor vetting lack visibility into the cybersecurity practices of their vendors’ subcontractors. This creates gaps in oversight that attackers can exploit to gain access to an institution’s data. Third-party providers often have direct access to critical systems, making them an attractive target. When they’re compromised, the consequences quickly extend to the credit unions they serve. ... Cybercriminals continue to exploit employee behavior as a primary entry point into financial institutions. Social engineering tactics — such as phishing, vishing, and impersonation — bypass technical safeguards by manipulating people. These attacks rely on trust, familiarity, or urgency to provoke an action that grants the attacker access to credentials, systems, or internal data. ... Many credit unions deliver cybersecurity training on an annual schedule or only during onboarding. These programs often lack depth, fail to differentiate between job functions, and lose effectiveness over time. When training is overly broad or infrequent, staff and leadership alike may be unprepared to recognize or respond to threats. The risk is heightened when the threats are evolving faster than the curriculum. TruStage advises tailoring cyber education to the institution’s structure and risk profile. Frontline staff who manage member accounts face different risks than board members or vendors. 

Daily Tech Digest - June 17, 2025


Quote for the day:

"Next generation leaders are those who would rather challenge what needs to change and pay the price than remain silent and die on the inside." -- Andy Stanley



Understanding how data fabric enhances data security and governance

“The biggest challenge is fragmentation; most enterprises operate across multiple cloud environments, each with its own security model, making unified governance incredibly complex,” Dipankar Sengupta, CEO of Digital Engineering Services at Sutherland Global told InfoWorld. ... Shadow IT is also a persistent threat and challenge. According to Sengupta, some enterprises discover nearly 40% of their data exists outside governed environments. Proactively discovering and onboarding those data sources has become non-negotiable. ... A data fabric deepens organizations’ understanding and control of their data and consumption patterns. “With this deeper understanding, organizations can easily detect sensitive data and workloads in potential violation of GDPR, CCPA, HIPAA and similar regulations,” Calvesbert commented. “With deeper control, organizations can then apply the necessary data governance and security measures in near real time to remain compliant.” ... Data security and governance inside a data fabric shouldn’t just be about controlling access to data, it should also come with some form of data validation. The cliched saying “garbage-in, garbage-out” is all too true when it comes to data. After all, what’s the point of ensuring security and governance on data that isn’t valid in the first place?


AI isn’t taking your job; the big threat is a growing skills gap

While AI can boost productivity by handling routine tasks, it can’t replace the strategic roles filled by skilled professionals, Vianello said. To avoid those kinds of issues, agencies — just like companies — need to invest in adaptable, mission-ready teams with continuously updated skills in cloud, cyber, and AI. The technology, he said, should augment – not replace — human teams, automating repetitive tasks while enhancing strategic work. Success in high-demand tech careers starts with in-demand certifications, real-world experience, and soft skills. Ultimately, high-performing teams are built through agile, continuous training that evolves with the tech, Vianello said. “We train teams to use AI platforms like Copilot, Claude and ChatGPT to accelerate productivity,” Vianello said. “But we don’t stop at tools; we build ‘human-in-the-loop’ systems where AI augments decision-making and humans maintain oversight. That’s how you scale trust, performance, and ethics in parallel.” High-performing teams aren’t born with AI expertise; they’re built through continuous, role-specific, forward-looking education, he said, adding that preparing a workforce for AI is not about “chasing” the next hottest skill. “It’s about building a training engine that adapts as fast as technology evolves,” he said.


Got a new password manager? Don't leave your old logins exposed in the cloud - do this next

Those built-in utilities might have been good enough for an earlier era, but they aren't good enough for our complex, multi-platform world. For most people, the correct option is to switch to a third-party password manager and shut down all those built-in password features in the browsers and mobile devices you use. Why? Third-party password managers are built to work everywhere, with a full set of features that are the same (or nearly so) across every device. After you make that switch, the passwords you saved previously are left behind in a cloud service you no longer use. If you regularly switch between browsers (Chrome on your Mac or Windows PC, Safari on your iPhone), you might even have multiple sets of saved passwords scattered across multiple clouds. It's time to clean up that mess. If you're no longer using a password manager, it's prudent to track down those outdated saved passwords and delete them from the cloud. I've studied each of the four leading browsers: Google Chrome, Apple's Safari, Microsoft Edge, and Mozilla Firefox. Here's how to find the password management settings for each one, export any saved passwords to a safe place, and then turn off the feature. As a final step, I explain how to purge saved passwords and stop syncing.


AI and technical debt: A Computer Weekly Downtime Upload podcast

Given that GenAI technology hit the mainstream with GPT 4 two years ago, Reed says: “It was like nothing ever before.” And while the word “transformational” tends to be generously overused in technology he describes generative AI as “transformational with a capital T.” But transformations are not instant and businesses need to understand how to apply GenAI most effectively, and figure out where it does and does not work well. “Every time you hear anything with generative AI, you hear the word journey and we're no different,” he says. “We are trying to understand it. We're trying to understand its capabilities and understand our place with generative AI,” Reed adds. Early adopters are keen to understand how to use GenAI in day-to-day work, which, he says, can range from being an AI-based work assistant or a tool that changes the way people search for information to using AI as a gateway to the heavy lifting required in many organisations. He points out that bet365 is no different. “We have a sliding scale of ambition, but obviously like anything we do in an organisation of this size, it must be measured, it must be understood and we do need to be very, very clear what we're using generative AI for.” One of the very clear use cases for GenAI is in software development. 


Cloud Exodus: When to Know It's Time to Repatriate Your Workloads

Because of the inherent scalability of cloud resources, the cloud makes a lot of sense when the compute, storage, and other resources your business needs fluctuate constantly in volume. But if you find that your resource consumption is virtually unchanged from month to month or year to year, you may not need the cloud. You may be able to spend less and enjoy more control by deploying on-prem infrastructure. ... Cloud costs will naturally fluctuate over time due to changes in resource consumption levels. It's normal if cost increases correlate with usage increases. What's concerning, however, is a spike in cloud costs that you can't tie to consumption changes. It's likely in that case that you're spending more either because your cloud service provider raised its prices or your cloud environment is not optimized from a cost perspective. ... You can reduce latency (meaning the delay between when a user requests data on the network and when it arrives) on cloud platforms by choosing cloud regions that are geographically proximate to your end users. But that only works if your users are concentrated in certain areas, and if cloud data centers are available close to them. If this is not the case, you are likely to run into latency issues, which could dampen the user experience you deliver. 
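The cost-spike test described above (whether spending rises with usage or independently of it) can be approximated by correlating monthly usage against the monthly bill. A sketch with purely illustrative numbers, showing a flat usage curve against a climbing bill:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Monthly compute usage (normalized) vs. cloud bill ($k); illustrative data
usage = [100, 103, 99, 102, 98, 101]   # essentially flat consumption
cost = [50, 51, 55, 58, 64, 71]        # bill climbing anyway

r = pearson(usage, cost)
# A weak usage/cost correlation while the bill climbs suggests the spike
# is not consumption-driven: look at provider price changes or waste.
```

If usage and cost track each other closely (correlation near 1), the growth is organic; a bill that rises while correlation stays weak is the warning sign the article describes.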


The future of data center networking and processing

The optical-to-electrical conversion that is performed by the optical transceiver is still needed in a CPO system, but it moves from a pluggable module located at the faceplate of the switching equipment to a small chip (or chiplet) that is co-packaged very closely to the target ICs inside the box. Data center chipset heavyweights Broadcom and Nvidia have both announced CPO-based data center networking products operating at 51.2 and 102.4 Tb/s. ... Early generation CPO systems, such as those announced by Broadcom and Nvidia for Ethernet switching, make use of high channel count fiber array units (FAUs) that are designed to precisely align the fiber cores to their corresponding waveguides inside the PICs. These FAUs are challenging to make as they require high fiber counts, mixed single-mode (SM) and polarization maintaining (PM) fibers, integration of micro-optic components depending on the fiber-to-chip coupling mechanism, highly precise tolerance alignments, CPO-optimized fibers and multiple connector assemblies.  ... In addition to scale and cost benefits, extreme densities can be achieved at the edge of the PIC by bringing the waveguides very close together, down to about 30µm, which is far closer than can be achieved with even the thinnest fibers. Next generation fiber-to-chip coupling will enable GPU optics – which will require unprecedented levels of density and scale.


Align AI with Data, Analytics and Governance to Drive Intelligent, Adaptive Decisions and Actions Across the Organisation

Unlocking AI’s full business potential requires building executive AI literacy. Executives must be educated on AI opportunities, risks and costs to make effective, future-ready decisions on AI investments that accelerate organisational outcomes. Gartner recommends D&A leaders introduce experiential upskilling programs for executives, such as developing domain-specific prototypes to make AI tangible. This will lead to greater and more appropriate investment in AI capabilities. ... Using synthetic data to train AI models is now a critical strategy for enhancing privacy and generating diverse datasets. However, complexities arise from the need to ensure synthetic data accurately represents real-world scenarios, scales effectively to meet growing data demand and integrates seamlessly with existing data pipelines and systems. “To manage these risks, organisations need effective metadata management,” said Idoine. “Metadata provides the context, lineage and governance needed to track, verify and manage synthetic data responsibly, which is essential to maintaining AI accuracy and meeting compliance standards.” ... Building GenAI models in-house offers flexibility, control and long-term value that many packaged tools cannot match. As internal capabilities grow, Gartner recommends organisations adopt a clear framework for build versus buy decisions. 


Do Microservices' Benefits Supersede Their Caveats? A Conversation With Sam Newman

A microservice is one of those where it is independently deployable so I can make a change to it and I can roll out new versions of it without having to change any other part of my system. So things like avoiding shared databases are really about achieving that independent deployability. And it's a really simple idea that can be quite easy to implement if you know about it from the beginning. It can be difficult to implement if you're already in a tangled mess. And that idea of independent deployability has interesting benefits because the fact that something is independently deployable is obviously useful because it's low impact releases, but there's loads of other benefits that start to flow from that. ... The vast majority of people who tell me they've scaling issues often don't have them. They could solve their scaling issues with a monolith, no problem at all, and it would be a more straightforward solution. They're typically organizational scale issues. And so, for me, what the world needs from our IT's product-focused, outcome-oriented, and more autonomous teams. That's what we need, and microservices are an enabler for that. Having things like team topologies, which of course, although the DevOps topology stuff was happening around the time of my first edition of my book, that being kind of moved into the team topology space by Matthew and Manuel around the second edition again sort of helps kind of crystallize a lot of those concepts as well.


Why Businesses Must Upgrade to an AI-First Connected GRC System

Adopting a connected GRC solution enables organizations to move beyond siloed operations by bringing risk and compliance functions onto a single, integrated platform. It also creates a unified view of risks and controls across departments, bringing better workflows and encouraging collaboration. With centralized data and shared visibility, managing complex, interconnected risks becomes far more efficient and proactive. In fact, this shift toward integration reflects a broader trend that is seen in the India Regulatory Technology Business Report 2024–2029 findings, which highlight the growing adoption of compliance automation, AI, and machine learning in the Indian market. The report points to a future where GRC is driven by data, merging operations, technology, and control into a single, intelligent framework. ... An AI-first, connected GRC solution takes the heavy lifting out of compliance. Instead of juggling disconnected systems and endless updates, it brings everything together, from tracking regulations to automating actions to keeping teams aligned. For compliance teams, that means less manual work and more time to focus on what matters. ... A smart, integrated GRC solution brings everything into one place. It helps organizations run more smoothly by reducing errors and simplifying teamwork. It also means less time spent on admin and better use of people and resources where they are really needed.


The Importance of Information Sharing to Achieve Cybersecurity Resilience

Information sharing among different sectors predominantly revolves around threats related to phishing, vulnerabilities, ransomware, and data breaches. Each sector tailors its approach to cybersecurity information sharing based on regulatory and technological needs, carefully considering strategies that address specific risks and identify resolution requirements. However, for the mobile industry, information sharing relating to cyberattacks on the networks themselves and misuse of interconnection signalling are also the focus of significant sharing efforts. Industries learn from each other by adopting sector-specific frameworks and leveraging real-time data to enhance their cybersecurity posture. This includes real-time sharing of indicators of compromise (IoCs) and the techniques, tactics, and procedures (TTPs) associated with phishing campaigns. An example of this is the recently launched Stop Scams UK initiative, bringing together tech, telecoms and finance industry leaders, who are going to share real-time data on fraud indicators to enhance consumer protection and foster economic security. This is an important development, as without cross-industry information sharing, determining whether a cybersecurity attack campaign is sector-specific or indiscriminate becomes difficult. 

Daily Tech Digest - April 21, 2025


Quote for the day:

"In simplest terms, a leader is one who knows where he wants to go, and gets up, and goes." -- John Erskine



Two ways AI hype is worsening the cybersecurity skills crisis

Another critical factor in the AI-skills shortage discussion is that attackers are also leveraging AI, putting defenders at an even greater disadvantage. Cybercriminals are using AI to generate more convincing phishing emails, automate reconnaissance, and develop malware that can evade detection. Meanwhile, security teams are struggling just to keep up. “AI exacerbates what’s already going on at an accelerated pace,” says Rona Spiegel, cyber risk advisor at GroScale and former cloud governance leader at Wells Fargo and Cisco. “In cybersecurity, the defenders have to be right all the time, while attackers only have to be right once. AI is increasing the probability of attackers getting it right more often.” ... “CISOs will have to be more tactical in their approach,” she explains. “There’s so much pressure for them to automate, automate, automate. I think it would be best if they could partner cross-functionally and focus on things like policy and urge the unification and simplification of how policies are adapted… and make sure how we’re educating the entire environment, the entire workforce, not just the cybersecurity.” Appayanna echoes this sentiment, arguing that when used correctly, AI can ease talent shortages rather than exacerbate them. 


Data mesh vs. data fabric vs. data virtualization: There’s a difference

“Data mesh is a decentralized model for data, where domain experts like product engineers or LLM specialists control and manage their own data,” says Ahsan Farooqi, global head of data and analytics, Orion Innovation. While data mesh is tied to certain underlying technologies, it’s really a shift in thinking more than anything else. In an organization that has embraced data mesh architecture, domain-specific data is treated as a product owned by the teams relevant to those domains. ... As Matt Williams, field CTO at Cornelis Networks, puts it, “Data fabric is an architecture and set of data services that provides intelligent, real-time access to data — regardless of where it lives — across on-prem, cloud, hybrid, and edge environments. This is the architecture of choice for large data centers across multiple applications.” ... Data virtualization is the secret sauce that can make that happen. “Data virtualization is a technology layer that allows you to create a unified view of data across multiple systems and allows the user to access, query, and analyze data without physically moving or copying it,” says Williams. That means you don’t have to worry about reconciling different data stores or working with data that’s outdated. Data fabric uses data virtualization to produce that single pane of glass: It allows the user to see data as a unified set, even if that’s not the underlying physical reality.
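Williams' description of data virtualization, a unified view over multiple systems without physically moving or copying data, can be illustrated with a toy join performed lazily at query time. The source names and fields here are hypothetical:

```python
# Minimal data-virtualization sketch: two 'systems' presented as one view.
crm_db = [{"id": 1, "name": "Acme", "region": "EU"}]          # pretend CRM
billing_db = [{"id": 1, "account": "Acme", "mrr": 1200}]      # pretend billing

def unified_customers():
    """Join the two sources lazily at query time; neither store's
    rows are duplicated into a new physical dataset."""
    mrr_by_account = {r["account"]: r["mrr"] for r in billing_db}
    for row in crm_db:
        yield {**row, "mrr": mrr_by_account.get(row["name"])}

view = list(unified_customers())
assert view[0]["mrr"] == 1200  # one pane of glass over both stores
```

Because the view is computed on demand from the live sources, there is no second copy to fall out of date, which is exactly the reconciliation problem virtualization avoids.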


Biometrics adoption strategies benefit when government direction is clear

Part of the problem seems to be the collision of private and public sector interests in digital ID use cases like right-to-work checks. They would fall outside the original conception of Gov.uk as a system exclusively for public sector interaction, but the business benefit they provide is strictly one of compliance. The UK government’s Office for Digital Identities and Attributes (OfDIA), meanwhile, brought the register of digital identity and attribute services to the public beta stage earlier this month. The register lists services certified to the digital identity and attributes trust framework to perform such compliance checks, and the recent addition of Gov.uk One Login provided the spark for the current industry conflagration. Age checks for access to online pornography in France now require a “double-blind” architecture to protect user privacy. The additional complexity still leaves clear roles, however, which VerifyMy and IDxLAB have partnered to fill. Yoti has signed up a French pay site, but at least one big international player would rather fight the age assurance rules in court. Aviation and border management is one area where the enforcement of regulations has benefited from private sector innovation. Preparation for Digital Travel Credentials is underway with Amadeus pitching its “journey pass” as a way to use biometrics at each touchpoint as part of a reimagined traveller experience. 



Will AI replace software engineers? It depends on who you ask

Effective software development requires "deep collaboration with other stakeholders, including researchers, designers, and product managers, who are all giving input, often in real time," said Callery-Colyne. "Dialogues around nuanced product and user information will occur, and that context must be infused into creating better code, which is something AI simply cannot do." The area where AIs and agents have been successful so far, "is that they don't work with customers directly, but instead assist the most expensive part of any IT, the programmers and software engineers," Thurai pointed out. "While the accuracy has improved over the years, Gen AI is still not 100% accurate. But based on my conversations with many enterprise developers, the technology cuts down coding time tremendously. This is especially true for junior to mid-senior level developers." AI software agents may be most helpful "when developers are racing against time during a major incident, to roll out a fixed code quickly, and have the systems back up and running," Thurai added. "But if the code is deployed in production as is, then it adds to tech debt and could eventually make the situation worse over the years, many incidents later."


Protected NHIs: Key to Cyber Resilience

We live in an era where cyber threats are continually evolving. Cyber attackers are getting smarter and more sophisticated with their techniques, and traditional security measures no longer suffice. NHIs can be the critical game-changer that organizations have been looking for. Why is this the case? Cyber attackers are no longer targeting just humans but machines as well. Remember that your IT estate includes computing resources like servers, applications, and services that all represent potential points of attack. Non-Human Identities have bridged the gap between human identities and machine identities, providing an added layer of protection. NHI security is of utmost importance, as these identities can have overarching permissions. A single mishap with an NHI can lead to severe consequences. ... Businesses increasingly rely on cloud-based services for a wide range of purposes, from storage solutions to sophisticated applications. That said, this growing dependency on the cloud has highlighted the pressing need for more robust and sophisticated security protocols. An NHI management strategy substantially supports this quest for fortified cloud security. By integrating with your cloud services, NHIs ensure secured access, moderated control, and streamlined data exchanges, all of which are instrumental in preventing unauthorized access and data breaches.


Job seekers using genAI to fake skills and credentials

“We’re seeing this a lot with our tech hires, and a lot of the sentence structure and overuse of buzzwords is making it super obvious,” said Joel Wolfe, president of HiredSupport, a California-based business process outsourcing (BPO) company. HiredSupport has more than 100 corporate clients globally, including companies in the eCommerce, SaaS, healthcare, and fintech sectors. Wolfe, who weighed in on the topic on LinkedIn, said he’s seeing AI-enhanced resumes “across all roles and positions, but most obvious in overembellished developer roles.” ... Employers generally say they don’t have a problem with applicants using genAI tools to write a resume, as long as it accurately represents a candidate’s qualifications and experience. ZipRecruiter, an online employment marketplace, said 67% of 800 employers surveyed reported they are open to candidates using genAI to help write their resumes, cover letters, and applications, according to its Q4 2024 Employer Report. Companies, however, face a growing threat from fake job seekers using AI to forge IDs, resumes, and interview responses. By 2028, a quarter of job candidates could be fake, according to Gartner Research. Once hired, impostors can steal data or money, or install ransomware. ... Another downside to the growing flood of AI deep fake applicants is that it affects “real” job applicants’ chances of being hired.


How Will the Role of Chief AI Officer Evolve in 2025?

For now, the role is less about exploring the possibilities of AI and more about delivering on its immediate, concrete value. “This year, the role of the chief AI officer will shift from piloting AI initiatives to operationalizing AI at scale across the organization,” says Agarwal. And as for those potential upheavals down the road? CAIOs will no doubt have to be nimble, but Martell doesn’t see their fundamental responsibilities changing. “You still have to gather the data within your company to be able to use with that model and then you still have to evaluate whether or not that model that you built is delivering against your business goals. That has never changed,” says Martell. ... AI is at the inflection point between hype and strategic value. “I think there's going to be a ton of pressure to find the right use cases and deploy AI at scale to make sure that we're getting companies to value,” says Foss. CAIOs could feel that pressure keenly this year as boards and other executive leaders increasingly ask to see ROI on massive AI investments. “Companies who have set these roles up appropriately, and more importantly the underlying work correctly, will see the ROI measurements, and I don't think that chief AI officers [at those] organizations should feel any pressure,” says Mohindra.


Cybercriminals blend AI and social engineering to bypass detection

With improved attack strategies, bad actors have compressed the average time from initial access to full control of a domain environment to less than two hours. Similarly, while a couple of years ago it would take a few days for attackers to deploy ransomware, it’s now being detonated in under a day and even in as few as six hours. With such short timeframes between the attack and the exfiltration of data, companies are simply not prepared. Historically, attackers avoided breaching “sensitive” industries like healthcare, utilities, and critical infrastructures because of the direct impact on people’s lives. ... Going forward, companies will have to reconcile the benefits of AI with its many risks. Implementing AI solutions expands a company’s attack surface and increases the risk of data getting leaked or stolen by attackers or third parties. Threat actors are using AI efficiently, to the point where any AI employee training you may have conducted is already outdated. AI has allowed attackers to bypass all the usual red flags you’re taught to look for, like grammatical errors, misspelled words, non-regional speech or writing, and a lack of context to your organization. Adversaries have refined their techniques, blending social engineering with AI and automation to evade detection.


AI in Cybersecurity: Protecting Against Evolving Digital Threats

As much as AI bolsters cybersecurity defenses, it also enhances the tools available to attackers. AI-powered malware, for example, can adapt its behavior in real time to evade detection. Similarly, AI enables cybercriminals to craft phishing schemes that mimic legitimate communications with uncanny accuracy, increasing the likelihood of success. Another alarming trend is the use of AI to automate reconnaissance. Cybercriminals can scan networks and systems for vulnerabilities more efficiently than ever before, highlighting the necessity for cybersecurity teams to anticipate and counteract AI-enabled threats. ... The integration of AI into cybersecurity raises ethical questions that must be addressed. Privacy concerns are at the forefront, as AI systems often rely on extensive data collection. This creates potential risks for mishandling or misuse of sensitive information. Additionally, AI’s capabilities for surveillance can lead to overreach. Governments and corporations may deploy AI tools for monitoring activities under the guise of security, potentially infringing on individual rights. There is also the risk of malicious actors repurposing legitimate AI tools for nefarious purposes. Clear guidelines and robust governance are crucial to ensuring responsible AI deployment in cybersecurity.


AI workloads set to transform enterprise networks

As AI companies leapfrog each other in terms of capabilities, they will be able to handle even larger conversations — and agentic AI may increase the bandwidth requirements exponentially and in unpredictable ways. Any website or app could become an AI app, simply by adding an AI-powered chatbot to it, says F5’s MacVittie. When that happens, a well-defined, structured traffic pattern will suddenly start looking very different. “When you put the conversational interfaces in front, that changes how that flow actually happens,” she says. Another AI-related challenge that networking managers will need to address is that of multi-cloud complexity. ... AI brings in a whole host of potential security problems for enterprises. The technology is new and unproven, and attackers are quickly developing new techniques for attacking AI systems and their components. That’s on top of all the traditional attack vectors, says Rich Campagna, senior vice president of product management at Palo Alto Networks. “At the edge, devices and networks are often distributed, which leads to visibility blind spots,” he adds. That makes it harder to fix problems if something goes wrong. Palo Alto is developing its own AI applications, Campagna says, and has been for years. And so are its customers.


Daily Tech Digest - September 09, 2024

Does your organization need a data fabric?

So, while real-time data integration and performing data transformations are key capabilities of data fabrics, their defining capability is in providing centralized, standardized, and governed access to an enterprise’s data sources. “When evaluating data fabrics, it’s essential to understand that they interconnect with various enterprise data sources, ensuring data is readily and rapidly available while maintaining strict data controls,” says Simon Margolis, associate CTO of AI/ML at SADA. “Unlike other data aggregation solutions, a functional data fabric serves as a “one-stop shop” for data distribution across services, simplifying client access, governance, and expert control processes.” Data fabrics thus combine features of other data governance and dataops platforms. They typically offer data cataloging functions so end-users can find and discover the organization’s data sets. Many will help data governance leaders centralize access control while providing data engineers with tools to improve data quality and create master data repositories. Other differentiating capabilities include data security, data privacy functions, and data modeling features.
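Those defining capabilities, a catalog for discovery plus centralized access control, can be reduced to a toy sketch. Everything below (dataset names, roles, sensitivity labels) is invented for illustration; a real fabric would back this with actual connectors, a metadata store, and a policy engine.

```python
# Toy fabric-style access layer: one catalog, one policy check,
# many underlying sources. All names here are illustrative only.
DATASETS = {
    "hr.salaries": {"source": "postgres://hr", "sensitivity": "restricted"},
    "web.traffic": {"source": "s3://logs", "sensitivity": "internal"},
}

# Each role is granted a set of sensitivity levels it may read.
GRANTS = {"analyst": {"internal"}, "hr_admin": {"internal", "restricted"}}

def resolve(dataset: str, role: str) -> str:
    """Return the physical source only if the role's grants cover it."""
    entry = DATASETS[dataset]
    if entry["sensitivity"] not in GRANTS.get(role, set()):
        raise PermissionError(f"{role} may not read {dataset}")
    return entry["source"]

print(resolve("web.traffic", "analyst"))   # s3://logs
print(resolve("hr.salaries", "hr_admin"))  # postgres://hr
```

The point of the sketch is the single choke point: every consumer resolves data through one governed catalog rather than connecting to sources directly.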


The Crucial Role of Manual Data Annotation and Labeling in Building Accurate AI Systems

Automatic annotation systems frequently suffer from severe limitations, most notably accuracy. Despite its rapid evolution, AI can still misunderstand context, fail to spot complex patterns, and perpetuate inherent biases in data. For example, an automated annotation system may mislabel an image of a person holding an object because it is unable to handle complicated scenarios or objects that overlap. Similarly, in textual data, automated systems may misread cultural references, idiomatic expressions, or sentiments. ... Manual annotation, on the other hand, uses human expertise to label data, ensuring accuracy, context understanding, and bias reduction. Humans are naturally skilled at handling ambiguity, understanding context, and making sense of complex patterns that machines may not be able to grasp. This expertise is critical in applications requiring absolute precision, such as healthcare diagnostics, legal document interpretation, and ethical AI deployment. Manual annotation adds a level of fairness that automated procedures typically lack. Human annotators can recognize and mitigate biases in datasets, whether they be racial, gender-based, or cultural. 


AI orchestration: Crafting harmony or creating dependency?

In a collaborative relationship, both parties have an equal and complementary role. AI excels at processing enormous amounts of data, pattern recognition and certain types of analysis, while people excel at creativity, emotional intelligence and complex decision-making. In this relationship, the human keeps agency by critically evaluating AI outputs and making final decisions. However, this relationship can easily veer into dependency, where we become unable or unwilling to perform tasks without AI help, even for tasks we could previously do independently. As AI outputs have become amazingly human-like and convincing, it is easy to accept them without critical evaluation or understanding, even when knowing the content may be a hallucination — an AI-generated output that appears convincing but is false or misleading. ... As AI continues to advance and become more indistinguishable from human interaction, the distinction between collaboration and dependency becomes increasingly blurred. Or worse, as leading historian Yuval Noah Harari, renowned for his works on the history and future of humankind, points out, intimacy is a powerful weapon that can then be used to persuade us.


The deflating AI bubble is inevitable — and healthy

Predicting the future is generally a fool’s errand, as Nobel Prize-winning physicist Niels Bohr recognized when he stated, “Prediction is very difficult, especially about the future.” This was particularly true in the early 1990s as the Web started to take off. Even internet pioneer and ethernet standard co-inventor Robert Metcalfe was doubtful of the internet’s viability when he predicted in 1995 that it had a 12-month future. Two years later, he literally ate his words at the 1997 WWW Conference when he blended a printed copy of his prediction with water and drank it. But there comes a point in a new technology when its potential benefits become clear even if the exact shape of its evolution is opaque. ... Many AI deployments and integrations are not revolutionary, however, but add incremental improvements and value to existing products and services. Graphics and presentation software provider Canva, for example, has integrated Google’s Vertex AI to streamline its video editing offering. Canva users can avoid a number of tedious editing steps to create videos in seconds rather than minutes or hours. And WPP, the global marketing services giant, has integrated Anthropic’s Claude AI service into its internal marketing system, WPP Open.


Blockchain And Quantum Computing Are On A Collision Course

Herman warns, “The real danger regarding the future of blockchain is that it’s used to build critical digital infrastructures before this serious security vulnerability has been fully investigated. Imagine a major insurance company putting at great expense all its customers into a blockchain-based network, and then three years later having to rip it all out to install a quantum-secure network, in its place.” Despite the bleak outlook, Herman offers a solution that lies within the very technology posing the threat. Quantum cryptography, particularly quantum random-number generators and quantum-resistant algorithms, could provide the necessary safeguards to protect blockchain networks from quantum attacks. “Quantum random-number generators are already being implemented today by banks, governments, and private cloud carriers. Adding quantum keys to blockchain software, and to all encrypted data, will provide unhackable security against both a classical computer and a quantum computer,” he notes. Moreover, the U.S. National Institute of Standards and Technology (NIST) has stepped in to address the issue by releasing standards for post-quantum cryptography. 


Low-Code Solutions Gain Traction In Banking And Insurance Digital Transformation

“Digital transformation should be focused on quick wins so that organizations can start seeing the ROI much sooner,” he said, noting that digital transformation is not just about adopting new technologies — it’s about fundamentally rethinking how businesses operate and deliver value to their customers. One of the recurring challenges he identified is the issue of onboarding in the banking sector. Despite variations in onboarding times from one bank to another, internal inefficiencies often cause delays. A portion of these delays stems from internal traffic rather than external factors. To address this, Arun MS advocated for a shift toward self-service portals, where customers can take control of processes like document submission. “Engaging customers as stakeholders in the process reduces internal bottlenecks and speeds up the overall timeline for onboarding,” he said. This approach not only enhances operational efficiency but also improves the customer experience, which is essential in an increasingly digital world. However, Arun MS was quick to caution that transferring processes to customers must be done thoughtfully.


Why We Need AI Professional Practice

AI’s capacity to learn, interpret, and abstract at scale alters how we navigate complex, manifestly unpredictable situations and solutions, and brings an ecosystem-scale vista of possibilities, challenges, and dependencies into view. It forces us to examine every aspect of the human condition and our increasing dependence on the tools we fashion. This is the pillar of “practice”, which will emerge from the need to harness both the immediate and indirect value advanced AI can bring. It is about direct interpretation, implementation, control, and effect, rather than indirect consideration, control, and effect. It is, in metaphorical terms then, about the rubber hitting the road. ... As we look at how AI will continue to shape the business landscape, we can see an element that hasn’t received much attention yet: how do we ensure that the right skills, best practices, and standards are developed and shared amongst those managing this AI revolution, and most importantly, how do we uphold the standard of that professional practice? Some voices liken the onset of AI to the invention of the Internet, which reflects the skills that are now required from staff, with new data showing that 66% of business leaders wouldn’t hire someone without AI skills.


AI cybersecurity needs to be as multi-layered as the system it’s protecting

By altering the technical design and development of AI before its training and deployment, companies can reduce their security vulnerabilities before they begin. For example, even selecting the correct model architecture has considerable implications, with each AI model exhibiting particular affinities to mitigate specific types of prompt injection or jailbreaks. Identifying the correct AI model for a given use case is important to its success, and this is equally true regarding security. Developing an AI system with embedded cybersecurity begins with how training data is prepared and processed. Training data must be sanitized and a filter to limit ingested training data is essential. Input restoration jumbles an adversary’s ability to evaluate the input-output relationship of an AI model by adding an extra layer of randomness. Companies should create constraints to reduce potential distortions of the learning model through Reject-On-Negative-Impact training. After that, regular security testing and vulnerability scanning of the AI model should be performed continuously. During deployment, developers should validate modifications and potential tampering through cryptographic checks. 
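The cryptographic-check step at deployment can be as simple as pinning a SHA-256 digest of the approved model artifact and refusing to load anything that does not match. A minimal stdlib-only sketch, in which the placeholder bytes stand in for real serialized weights:

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Hex digest of an artifact's contents."""
    return hashlib.sha256(data).hexdigest()

# At release time, record the digest of the approved model artifact.
approved_model = b"...serialized model weights..."  # placeholder bytes
pinned = sha256_digest(approved_model)

def load_model(artifact: bytes, expected: str) -> bytes:
    """Refuse any artifact whose digest differs from the pinned value."""
    if sha256_digest(artifact) != expected:
        raise ValueError("model artifact failed integrity check")
    return artifact

load_model(approved_model, pinned)  # untampered artifact loads fine

tampered = approved_model + b"\x00"  # a single flipped/added byte
try:
    load_model(tampered, pinned)
except ValueError as err:
    print("blocked:", err)
```

In practice the pinned digest would live in a signed release manifest, so an attacker who can swap the weights cannot also swap the expected value.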


Kipu Quantum Team Says New Quantum Algorithm Outshines Existing Techniques

A Kipu Quantum-led team of researchers announced the successful testing of what they’re labeling the largest quantum optimization problem run on a digital quantum computer. They suggest that this marks the start of the commercial quantum advantage era. ... Combinatorial optimization is critical in many industries, from logistics and scheduling to computational chemistry and biology. These problems, which involve finding the best or near-optimal solutions in large discrete configuration spaces, are known to be computationally challenging, particularly for classical computing. This complexity has driven the exploration of quantum optimization techniques as an alternative. ... While Kipu Quantum’s BF-DCQO algorithm shows promise, the results are based on simulations and experiments using specific quantum architectures. The 156-qubit experimental validation was performed on IBM’s heavy-hex processor, while the 433-qubit simulation is yet to be fully realized on physical hardware. There are still challenges in scaling the method to address more complex real-world HUBO problems that require larger quantum systems.
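The article leaves HUBO undefined: higher-order unconstrained binary optimization, a generalization of QUBO in which cost terms may couple more than two binary variables. A tiny invented instance, solved by classical brute force, shows both the problem shape and why exhaustive search collapses at scale, since the search space doubles with every added variable:

```python
from itertools import product

# Toy HUBO: minimize E(x) = 2*x0 - 3*x1 + 4*x0*x1*x2 over x in {0,1}^3.
# Keys are tuples of variable indices; the three-index term is what
# makes this "higher-order" (beyond QUBO's pairwise couplings).
terms = {(0,): 2.0, (1,): -3.0, (0, 1, 2): 4.0}

def energy(bits, terms):
    """Sum each coefficient whose variables are all set to 1."""
    return sum(c * all(bits[i] for i in idx) for idx, c in terms.items())

# Exhaustive search visits all 2^n assignments: trivial for n=3,
# hopeless for the 156- and 433-variable instances in the article.
best = min(product((0, 1), repeat=3), key=lambda b: energy(b, terms))
print(best, energy(best, terms))  # (0, 1, 0) -3.0
```

Quantum heuristics such as the BF-DCQO approach described above aim to find good assignments in exactly these exponentially large spaces without enumerating them.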


Inside the Mind of a Hacker: How Scams Are Carried Out

Hacking is, first and foremost, a mindset. It’s a likely avenue to pursue when you're endowed with an organized mind, a passion for IT, and a boundless curiosity about taking things apart and understanding their inner workings. Since highly publicized cases usually involve the theft of exorbitant sums, it’s logical for the public to assume that monetary gain is the top motivator. While it’s high on the list, studies that explore hacker motivation consistently rank the thrill of circumventing cyber defenses and the accompanying display of one’s mastery as chief driving forces. Hacking is both technical and creative. Successful hacks happen due to a combination of high technical prowess, the ability to grasp and implement novel solutions, and a general disregard for the consequences of those actions. ... The last step involves capitalizing on a hacker’s ill-gotten gains. Those who have managed to convince someone to transfer funds use mule accounts and money laundering schemes to eventually get a hold of them. Hackers who get their hands on a company’s industrial secrets may try to sell them to the competition. Data obtained through breaches finds its way to the dark web, where other hackers may purchase it in bulk.



Quote for the day:

"Listen with curiosity speak with honesty, act with integrity." -- Roy T. Benett

Daily Tech Digest - July 13, 2024

Work in the Wake of AI: Adapting to Algorithmic Management and Generative Technologies

Current legal frameworks are struggling to keep pace with the issues arising from algorithmic management. Traditional employment laws, such as those concerning unfair dismissal, often do not extend protections to “workers” as a distinct category. Furthermore, discrimination laws require proof that the discriminatory behaviour was due or related to the protected characteristic, which is difficult to ascertain and prove with algorithmic systems. To mitigate these issues, the researchers recommend a series of measures. These include ensuring algorithmic systems respect workers’ rights, granting workers the right to opt out of automated decisions such as job termination, banning excessive data monitoring and establishing the right to a human explanation for decisions made by algorithms. ... Despite the rapid deployment of GenAI and the introduction of policies around its use, concerns about misuse are still prevalent among nearly 40% of tech leaders. While recognising AI’s potential, 55% of tech leaders have yet to identify clear business applications for GenAI beyond personal productivity enhancements, and budget constraints remain a hurdle for some.


The rise of sustainable data centers: Innovations driving change

Data centers contribute significantly to global carbon emissions, making it essential to adopt measures that reduce their carbon footprint. Carbon usage effectiveness (CUE) is a metric used to assess a data center's carbon emissions relative to its energy consumption. By minimizing CUE, data centers can significantly lower their environmental impact. ... Cooling is one of the largest energy expenses for data centers. Traditional air cooling systems are often inefficient, prompting the need for more advanced solutions. Free cooling, which leverages outside air, is a cost-effective method for data centers in cooler climates. Liquid cooling, on the other hand, uses water or other coolants to transfer heat away from servers more efficiently than air. ... Building and retrofitting data centers sustainably involves adhering to green building certifications like Leadership in Energy and Environmental Design (LEED) and Building Research Establishment Environmental Assessment Method (BREEAM). These certifications ensure that buildings meet high environmental performance standards.
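For reference, The Green Grid, which introduced the metric, defines CUE as the total CO2-equivalent emissions attributable to the data center's energy divided by the energy delivered to IT equipment. A quick worked example with invented figures:

```python
def cue(total_co2_kg: float, it_energy_kwh: float) -> float:
    """Carbon usage effectiveness: kgCO2eq emitted per kWh of IT energy."""
    return total_co2_kg / it_energy_kwh

# Invented figures: a grid mix emitting 450 kgCO2eq per MWh, a facility
# drawing 1,500 MWh in total of which 1,000 MWh reaches IT equipment.
total_energy_mwh = 1500.0
it_energy_mwh = 1000.0
grid_intensity = 450.0  # kgCO2eq per MWh

total_co2 = total_energy_mwh * grid_intensity          # 675,000 kgCO2eq
print(round(cue(total_co2, it_energy_mwh * 1000), 3))  # 0.675
```

Lowering CUE therefore has two levers: cut the facility overhead (so more of the drawn energy reaches IT) or switch to a lower-carbon energy mix.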


How AIOps Is Poised To Reshape IT Operations

A meaningfully different, as yet underutilized, high-value data set can be derived from the rich, complex interactions of information sources and users on the network, promising to triangulate and correlate with the other data sets available, elevating their combined value to the use case at hand. The challenge in leveraging this source is that the raw traffic data is impossibly massive and too complex for direct ingestion. Further, even compressed into metadata, without transformation, it becomes a disparate stream of rigid, high-cardinality data sets due to its inherent diversity and complexity. A new breed of AIOps solutions is poised to overcome this data deficiency and transform this still raw data stream into refined collections of organized data streams that are augmented and edited through intelligent feature extraction. These solutions use an adaptive AI model and a multi-step transformation sequence to work as an active member of a larger AIOps ecosystem by harmonizing data feeds with the workflows running on the target platform, making it more relevant and less noisy.


Addressing Financial Organizations’ Digital Demands While Avoiding Cyberthreats

The financial industry faces a difficult balancing act, with multiple conflicting priorities at the forefront. Organizations must continually strengthen security around their evolving solutions to keep up in an increasingly competitive and fast-moving landscape. But while strong security is a requirement, it cannot impact usability for customers or employees in an industry where accessibility, agility and the overall user experience are key differentiators. One of the best options to balancing these priorities is the utilization of secure access service edge (SASE) solutions. This model integrates several different security features such as secure web gateway (SWG), zero-trust network access (ZTNA), next-generation firewall (NGFW), cloud access security broker (CASB), data loss prevention (DLP) and network management functions, such as SD-WAN, into a single offering delivered via the cloud. Cloud-based delivery enables financial organizations to easily roll out SASE services and consistent policies to their entire network infrastructure, including thousands of remote workers scattered across various locations, or multiple branch offices to protect private data and users, as well as deployed IoT devices.


Three Signs You Might Need a Data Fabric

One of the most significant challenges organizations face is data silos and fragmentation. As businesses grow and adopt new technologies, they often accumulate disparate data sources across different departments and platforms. These silos make it tougher to have a holistic view of your organization's data, resulting in inefficiencies and missed opportunities. ... You understand that real-time analytics is crucial to your organization’s success. You need to respond quickly to changing market conditions, customer behavior, and operational events. Traditional data integration methods, which often rely on batch processing, can be too slow to meet these demands. You need real-time analytics to:
- Manage the customer experience. If enhancing a customer’s experience through personalized and timely interactions is a priority, real-time analytics is essential.
- Operate efficiently. Real-time monitoring and analytics can help optimize operations, reduce downtime, and improve overall efficiency.
- Handle competitive pressure. Staying ahead of competitors requires quick adaptation to market trends and consumer demands, which is facilitated by real-time insights.


The Tension Between The CDO & The CISO: The Balancing Act Of Data Exploitation Versus Protection

While data delivers a significant competitive advantage to companies when used appropriately, without the right data security measures in place it can be misused. This not only erodes customers’ trust but also puts the company at risk of having to pay penalties and fines for non-compliance with data security regulations. As data teams aim to extract and exploit data for the benefit of the organisation, it is important to note that not all data is equal. As such a risk-based approach must be in place to limit access to sensitive data across the organisation. In doing this the IT system will have access to the full spectrum of data to join and process the information, run through models and identify patterns, but employees rarely need access to all this detail. ... To overcome the conflict of data exploitation versus security and deliver a customer experience that meets customer expectations, data teams and security teams need to work together to achieve a common purpose and align on the culture. To achieve this each team needs to listen to and understand their respective needs and then identify solutions that work towards helping to make the other team successful.


Content Warfare: Combating Generative AI Influence Operations

Moderating such enormous amounts of content by human beings is impossible. That is why tech companies now employ artificial intelligence (AI) to moderate content. However, AI content moderation is not perfect, so tech companies add a layer of human moderation for quality checks to the AI content moderation processes. These human moderators, contracted by tech companies, review user-generated content after it is published on a website or social media platform to ensure it complies with the “community guidelines” of the platform. However, generative AI has forced companies to change their approach to content moderation. ... Countering such content warfare requires collaboration across generative AI companies, social media platforms, academia, trust and safety vendors, and governments. AI developers should build models with detectable and fact-sensitive outputs. Academics should research the mechanisms of foreign and domestic influence operations emanating from the use of generative AI. Governments should impose restrictions on data collection for generative AI, impose controls on AI hardware, and provide whistleblower protection to staff working in the generative AI companies. 


OpenAI reportedly nears breakthrough with “reasoning” AI, reveals progress framework

OpenAI isn't alone in attempting to quantify levels of AI capabilities. As Bloomberg notes, OpenAI's system feels similar to levels of autonomous driving mapped out by automakers. And in November 2023, researchers at Google DeepMind proposed their own five-level framework for assessing AI advancement, showing that other AI labs have also been trying to figure out how to rank things that don't yet exist. OpenAI's classification system also somewhat resembles Anthropic's "AI Safety Levels" (ASLs) first published by the maker of the Claude AI assistant in September 2023. Both systems aim to categorize AI capabilities, though they focus on different aspects. Anthropic's ASLs are more explicitly focused on safety and catastrophic risks (such as ASL-2, which refers to "systems that show early signs of dangerous capabilities"), while OpenAI's levels track general capabilities. However, any AI classification system raises questions about whether it's possible to meaningfully quantify AI progress and what constitutes an advancement. The tech industry so far has a history of overpromising AI capabilities, and linear progression models like OpenAI's potentially risk fueling unrealistic expectations.


White House Calls for Defending Critical Infrastructure

The memo encourages federal agencies "to consult with regulated entities to establish baseline cybersecurity requirements that can be applied across critical infrastructures" while maintaining agility and adaptability to mature with the evolving cyberthreat landscape. ONCD and OMB also urged agencies and federal departments to study open-source software initiatives and the benefits that can be gained by establishing a governance function for open-source projects modeled after the private sector. Budget submissions should identify existing departments and roles designed to investigate, disrupt and dismantle cybercrimes, according to the memo, including interagency task forces focused on combating ransomware infrastructure and the abuse of virtual currency. Meanwhile, the administration is continuing its push for agencies to only use software provided by developers who can attest to their compliance with minimum secure software development practices. The national cyber strategy - as well as the joint memo - directs agencies to "utilize grant, loan and other federal government funding mechanisms to ensure minimum security and resilience requirements" are incorporated into critical infrastructure projects.


Unifying Analytics in an Era of Advanced Tech and Fragmented Data Estates

“Data analytics has a last-mile problem,” according to Alex Gnibus, technical product marketing manager, architecture at Alteryx. “In shipping and transportation, you often think of the last-mile problem as that final stage of getting the passenger or the delivery to its final destination. And it’s often the most expensive and time-consuming part.” Data faces a similar problem: once a data stack is assembled, enabling the business at large to derive value from it is a key challenge for the modern enterprise. Achieving business value from data is the last mile, made difficult by numerous, complex technologies that are inaccessible to the final business user. Gnibus explained that Alteryx solves this problem by acting as the “truck” that delivers tangible business value from proprietary data, offering data discovery, use case identification, preparation and analysis, insight-sharing, and AI-powered capabilities. Acting as the easy-to-use interface for a business’ data infrastructure, Alteryx is an AI platform for large-scale enterprise analytics that offers no-code, drag-and-drop functionality and works with your unique data framework configuration as it evolves.



Quote for the day:

“Success is most often achieved by those who don't know that failure is inevitable.” -- Coco Chanel