
Daily Tech Digest - February 16, 2026


Quote for the day:

"People respect leaders who share power and despise those who hoard it." -- Gordon Tredgold



TheCUBE Research 2026 predictions: The year of enterprise ROI

Fourteen years into the modern AI era, our research indicates AI is maturing rapidly. The data suggests we are entering the enterprise productivity phase, where we move beyond the novelty of retrieval-augmented-generation-based chatbots and agentic experimentation. In our view, 2026 will be remembered as the year that kicked off decades of enterprise AI value creation. ... Bob Laliberte agreed the prediction is plausible and argued OpenAI is clearly pushing into the enterprise developer segment. He said the consumerization pattern is repeating – consumer adoption often drives faster enterprise adoption – and he viewed OpenAI’s Super Bowl presence as a flag in the ground, with Codex ads and meaningful spend behind them. He said he is hearing from enterprises using Codex in meaningful ways, including cases where as much as three quarters of programming is done with Codex, and discussions of a first 100% Codex-developed product. He emphasized that driving broader adoption requires leaning on early adopters, surfacing use cases, and showing productivity gains so they can be replicated across environments. ... Paul Nashawaty said application development is bifurcating. Lines of business and citizen developers are taking on more responsibility for work that historically sat with professional developers. He said professional developers don’t go away – their work shifts toward “true professional development,” while line of business developers focus on immediate outcomes.


Snowflake CEO: Software risks becoming a “dumb data pipe” for AI

Ramaswamy argues that his company lives with the fear that organizations will stop using AI agents built by software vendors. These specialized agents must clearly add value, for example by being more accurate, operating more securely, and being easier to use. For experienced users of existing platforms, this is already the case. A solution such as NetSuite or Salesforce offers AI functionality as an extension of familiar systems, so adoption of these features almost always takes place without migration. Ramaswamy believes customers have the final say on this: if they want to consult a central AI and ignore traditional enterprise apps, they should be given that option, according to the Snowflake CEO. ... However, the tug-of-war over the center of AI is in full swing. Vendors have good reason to claim that their solution should be the central AI system, for example because it holds enormous amounts of data or because it is the most critical application for certain departments. So far, AI trends among these vendors have revolved around the adoption of AI chatbots, easy-to-set-up or ready-made agentic workflows, and automatic document generation. During several IT events over the past year, attendees toyed with the idea that old interfaces may disappear because every employee will be talking to the data via AI.


Will LLMs Become Obsolete?

“We are at a unique time in history,” write Ashu Garg and Jaya Gupta at Foundation Capital, citing multimodal systems, multiagent systems, and more. “Every layer in the AI stack is improving exponentially, with no signs of a slowdown in sight. As a result, many founders feel that they are building on quicksand. On the flip side, this flywheel also presents a generational opportunity. Founders who focus on large and enduring problems have the opportunity to craft solutions so revolutionary that they border on magic.” ... “When we think about the future of how we can use agentic systems of AI to help scientific discovery,” Matias said, “what I envision is this: I think about the fact that every researcher, even grad students or postdocs, could have a virtual lab at their disposal ...” ... In closing, Matias described what makes him enthusiastic about the future. “I'm really excited about the opportunity to actually take problems that make a difference, that if we solve them, we can actually have new scientific discovery or have societal impact,” he said. “The ability to then do the research, and apply it back to solve those problems, what I call the ‘magic cycle’ of research, is accelerating with AI tools. We can actually accelerate the scientific side itself, and then we can accelerate the deployment of that, and what would take years before can now take months, and the ability to actually open it up for many more people, I think, is amazing.”


Deepfake business risks are growing – here's what leaders need to know

The risk of deepfake attacks appears to be growing as the technology becomes more accessible. The threat from deepfakes has escalated from a “niche concern” to a “mainstream cybersecurity priority” at “remarkable speed”, says Cooper. “The barrier to entry has lowered dramatically thanks to open source software and automated creation tools. Even low-skilled threat actors can launch highly convincing attacks.” The target pool is also expanding, says Cooper. “As larger corporations invest in advanced mitigation strategies, threat actors are turning their attention to small and medium-sized businesses, which often lack the resources and dedicated cybersecurity teams to combat these threats effectively.” The technology itself is also improving. Deepfakes have already improved “a staggering amount” – even in the past six months, says McClain. “The tech is internalising human mannerisms all the time. It is already widely accessible at a consumer level, even used as a form of entertainment via face swap apps.” ... Meanwhile, technology can be helpful in mitigating deepfake attack risks. Cooper recommends deepfake detection tools that use AI to analyse facial movements, voice patterns and metadata in emails, calls and video conferences. “While not foolproof, these tools can flag suspicious content for human review.” With the risks in mind, it also makes sense to implement multi-factor authentication for sensitive requests. 


The Big Shift: From “More Qubits” to Better Qubits

As quantum systems grew, it became clear that more qubits do not always mean more computing power. Most physical qubits are too noisy, unstable, and short-lived to run useful algorithms. Errors pile up faster than useful results, and after a while, the output stops making sense. Adding more fragile qubits now often makes things worse, not better. This realization has led to a shift in thinking across the field. Instead of asking how many qubits fit on a chip, researchers and engineers now ask a tougher question: how many of those qubits can actually be trusted? ... For businesses watching from the outside, this change matters. It is easier to judge claims when vendors talk about error rates, runtimes, and reliability instead of vague promises. It also helps set realistic expectations. Logical qubits show that early useful systems will be small but stable, solving specific problems well instead of trying to do everything. This new way of thinking also changes how we look at risk. The main risk is not that quantum computing will fail completely. Instead, the risk is that organizations will misunderstand early progress and either invest too much because of hype or too little because of old ideas. Knowing how important error correction is helps clear up this confusion. One of the clearest signs of maturity is how failure is handled. In early science, failure can be unclear. 


Reimagining digital value creation at Inventia Healthcare

“The business strategy and IT strategy cannot be two different strategies altogether,” he explains. “Here at Inventia, IT strategy is absolutely coupled with the core mission of value-added oral solid formulations. The focus is not on deploying systems, it is on creating measurable business value.” Historically, the pharmaceutical industry has been perceived as a laggard in technology adoption, largely due to stringent regulatory requirements. However, this narrative has shifted significantly over the last five to six years. “Regulators and organisations realised that without digitalisation, it is impossible to reach the levels of efficiency and agility that other industries have achieved,” notes Nandavadekar. “Compliance is no longer a barrier, it is an enabler when implemented correctly.” ... “Digitalisation mandates streamlined and harmonised operations. Once all processes are digital, we can correlate data across functions and even correlate how different operations impact each other,” points out Nandavadekar. ... With expanding digital footprints across cloud, IoT, and global operations, cybersecurity has become a mission-critical priority for Inventia. Nandavadekar describes cybersecurity as an “iceberg,” where visible threats represent only a fraction of the risk landscape. “In the pharmaceutical world, cybersecurity is not just about hackers, it is often a national-level activity. India is emerging as a global pharma hub, and that makes us a strategic target.”


Scaling Agentic AI: When AI Takes Action, the Real Challenge Begins

Organizations often underestimate tool risk. The model is only one part of the decision chain. The real exposure comes from the tools and APIs the agent can call. If those are loosely governed, the agent becomes privileged automation moving faster than human oversight can keep up with. “Agentic AI does not just stress models. It stress-tests the enterprise control plane.” ... Agentic AI requires reliable data, secure access, and strong observability. If data quality is inconsistent and telemetry is incomplete, autonomy turns into uncertainty. Leaders need a clear method to select use cases based on business value, feasibility, risk class, and time-to-impact. The operating model should enforce stage gates and stop low-value projects early. Governance should be built into delivery through reusable patterns, reference architectures, and pre-approved controls. When guardrails are standardized, teams move faster because they no longer have to debate the same risk questions repeatedly. ... Observability must cover the full chain, not just model performance. Teams should be able to trace prompts, context, tool calls, policy decisions, approvals, and downstream outcomes. ... Agentic AI introduces failure modes that can appear plausible on the surface. Without traceability and real-time signals, organizations are forced to guess, and guessing is not an operating strategy.
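Full-chain traceability of the kind described above can be sketched as a thin wrapper around every tool an agent is permitted to call. The following Python sketch is purely illustrative, not any vendor's control plane: the in-memory `TRACE` store and the `lookup_order` tool are hypothetical, and a real deployment would ship these records to a telemetry backend and attach policy decisions and approvals.

```python
import functools
import time

# Hypothetical in-memory trace store; a real system would stream these
# records to a telemetry backend alongside prompts and policy decisions.
TRACE = []

def traced_tool(func):
    """Record every tool invocation an agent makes: name, args, outcome."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        record = {"tool": func.__name__, "args": args,
                  "kwargs": kwargs, "ts": time.time()}
        try:
            record["result"] = func(*args, **kwargs)
            record["status"] = "ok"
        except Exception as exc:
            record["status"] = f"error: {exc}"
            raise
        finally:
            TRACE.append(record)  # the call is logged even on failure
        return record["result"]
    return wrapper

@traced_tool
def lookup_order(order_id):
    # Stand-in for a real API the agent is allowed to call.
    return {"order_id": order_id, "state": "shipped"}

lookup_order("A-1001")
print(TRACE[0]["tool"], TRACE[0]["status"])
```

Because the wrapper appends the record in a `finally` block, failed tool calls are traced too, which is exactly the signal needed to avoid guessing when a plausible-looking run goes wrong.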


Security at AI speed: The new CISO reality

The biggest shift isn’t tooling (we’ve always had to choose our platforms carefully); it’s accountability. When an AI agent acts at scale, the CISO remains accountable for the outcome. That governance and operating model simply didn’t exist a decade ago. Equally, CISOs now carry accountability for inaction. Failing to adopt and govern AI-driven capabilities doesn’t preserve safety; it increases exposure by leaving the organization structurally behind. The CISO role will need to adopt a fresh mindset and the skills to go with it to meet this challenge. ... While quantification has value, seeking precision based on historical data before ensuring strong controls, ownership, and response capability creates a false sense of confidence. It anchors discussion in technical debt and past trends, rather than aligning leadership around emerging risks and sponsoring a bolder strategic leap through innovation. That forward-looking lens drives better strategy, faster decisions, and real organizational resilience. ... When a large incumbent experiences an outage, breach, model drift, or regulatory intervention, the business doesn’t degrade gracefully; it fails hard. The illusion of safety disappears quickly when you realise you don’t own the kill switches, can’t constrain behaviour in real time, and don’t control the recovery path. Vendor scale does not equal operational resilience.


Why Borderless AI Is Coming to an End

Most countries are still wrestling with questions related to "sovereign AI" - the technical ambition to develop domestic compute, models and data capabilities - and "AI sovereignty" - the political and legal right to govern how AI operates within national boundaries, said Gaurav Gupta, vice president analyst at Gartner. Most national strategies today combine both. "There is no AI journey without thinking geopolitics in today's world," said Akhilesh Tuteja, partner, advisory services and former head of cybersecurity at KPMG. ... Smaller nations, Gupta said, are increasing their investment in domestic AI stacks as they look for alternatives to the closed U.S. model, including computing power, data centers, infrastructure and models aligned with local laws, culture and region. "Organizations outside the U.S. and China are investing more in sovereign cloud IaaS to gain digital and technological independence," said Rene Buest, senior director analyst at Gartner. "The goal is to keep wealth generation within their own borders to strengthen the local economy." ... The practical barriers to AI sovereignty start with infrastructure. The level of investment is beyond the reach of most countries, creating a fundamental asymmetry in the global AI landscape. "One gigawatt new data centers cost north of $50 billion," Gupta said. "The biggest constraint today is availability of power … You are now competing for electricity with residential and other industrial use cases."


Why Data Governance Fails in Many Organizations: The IT-Business Divide

The problem extends beyond missing stewardship roles to a deeper documentation chaos. Organizations often have multiple documents addressing the same concepts, but the language varies depending on which unit you ask, when you ask, and to whom you’re speaking. Some teams call these documents “policies,” while others use terms like “guidelines,” “standards,” or “procedures,” with no clarity on which term means what or whether these documents represent the same authority level. More critically, no one has the responsibility or authority to define which version is the “appropriate” one. Documents get written – often as part of project deliverables or compliance exercises – but no governance process ensures they’re actually embedded into operations, kept current, or reconciled with other documents covering similar ground. ... Without proper governance, a problematic pattern emerges: Technical teams impose technical obligations on business people, requiring them to validate data formats, approve schema changes, or participate in narrow technical reviews, while the real governance questions go unaddressed. Business stakeholders are involved only in a few steps of the data lifecycle, without understanding the whole picture or having authority over business-critical decisions. ... The governance challenges become even more insidious when organizations produce reports that appear identical in format while concealing fundamental differences in their underlying methodology.

Daily Tech Digest - October 18, 2024

Breaking Barriers: The Power of Cross-Departmental Collaboration in Modern Business

In an era of rapid change and increasing complexity, cross-departmental collaboration is no longer a luxury but a necessity. By dismantling silos, fostering trust, and leveraging technology, organizations can unlock their full potential, drive innovation, and enhance customer satisfaction. While industry leaders have shown the way, the journey to a truly collaborative culture requires sustained effort and adaptation. To embark on this collaborative journey, organizations must prioritize collaboration as a core value, invest in leadership development, empower employees, leverage technology, and measure progress. Creating a collaborative culture is like building a bridge between departments: it requires strong foundations, continuous maintenance, and a shared vision. By doing so, they can create a culture where innovation thrives, employees are engaged, and customers benefit from improved products and services. Looking ahead, successful organizations will not only embrace collaboration but also anticipate its evolution in response to emerging trends like remote work, artificial intelligence, and data privacy. By proactively addressing these challenges and opportunities, businesses can position themselves as leaders in the collaborative economy.


Singapore releases guidelines for securing AI systems and prohibiting deepfakes in elections

"AI systems can be vulnerable to adversarial attacks, where malicious actors intentionally manipulate or deceive the AI system," said Singapore's Cyber Security Agency (CSA). "The adoption of AI can also exacerbate existing cybersecurity risks to enterprise systems, [which] can lead to risks such as data breaches or result in harmful, or otherwise undesired model outcomes." "As such, AI should be secure by design and secure by default, as with all software systems," the government agency said. ... "The Bill is scoped to address the most harmful types of content in the context of elections, which is content that misleads or deceives the public about a candidate, through a false representation of his speech or actions, that is realistic enough to be reasonably believed by some members of the public," Teo said. "The condition of being realistic will be objectively assessed. There is no one-size-fits-all set of criteria, but some general points can be made." These encompass content that "closely match[es]" the candidates' known features, expressions, and mannerisms, she explained. The content also may use actual persons, events, and places, so it appears more believable, she added.


2025 and Beyond: CIOs' Guide to Stay Ahead of Challenges

As enterprises move beyond the "experiment" or the "proof of concept" stage, it is time to design and formalize a well-thought-out AI strategy that is tailored to their unique business needs. According to Gartner, while 92% of CIOs anticipate AI will be integrated into their organizations by 2025 - broadly driven by increasing pressure from CEOs and boards - 49% of leaders admit their organizations struggle to assess and showcase AI's value. That's where the strategy kicks in. ... Forward-looking CIOs are focused on using data for decision-making while tackling challenges related to its quality and availability. Data governance is a crucial aspect to deal with. As data systems become more interconnected, managing complexity is crucial. Going forward, CIOs will have to focus on optimizing current systems, raising data literacy, managing complexity, and establishing strong governance. The importance of shifting IT from a cost center to a profit driver lies in focusing on data-driven revenue generation, said Eric Johnson ... CIOs should be able to communicate the strategic use of IT investment and present it as a core enabler for competitiveness.


5 Ways to Reduce SaaS Security Risks

It's important to understand what corporate assets are visible to attackers externally and, therefore, could be a target. Arguably, the SaaS attack surface extends to every SaaS, IaaS and PaaS application, account, user credential, OAuth grant, API, and SaaS supplier used in your organization—managed or unmanaged. Monitoring this attack surface can feel like a Sisyphean task, given that any user with a credit card, or even just a corporate email address, has the power to expand the organization's attack surface in just a few clicks. ... Single sign-on (SSO) provides a centralized place to manage employees' access to enterprise SaaS applications, which makes it an integral part of any modern SaaS identity and access governance program. Most organizations strive to ensure that all business-critical applications (i.e., those that handle customer data, financial data, source code, etc.) are enrolled in SSO. However, when new SaaS applications are introduced outside of IT governance processes, this makes it difficult to truly assess SSO coverage. ... Multi-factor authentication adds an extra layer of security to protect user accounts from unauthorized access. By requiring multiple factors for verification, such as a password and a unique code sent to a mobile device, it significantly decreases the chances of hackers gaining access to sensitive information. 
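The “unique code sent to a mobile device” mentioned above is typically a time-based one-time password (TOTP, RFC 6238). As a hedged illustration of how that second factor is computed (production systems should use a vetted library such as pyotp rather than hand-rolled crypto), here is a minimal sketch:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 time-based one-time password: HMAC-SHA1 over the
    current 30-second time step, dynamically truncated to 6 digits."""
    key = base64.b32decode(secret_b32)
    counter = int((at if at is not None else time.time()) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0]
            & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test-vector secret: the ASCII string "12345678901234567890".
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, at=59))  # → 287082, matching the RFC 6238 test vector
```

Because the code depends only on a shared secret and the clock, a stolen password alone is not enough to pass verification, which is the extra layer the article describes.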


World’s smallest quantum computer unveiled, solves problems with just 1 photon

In the new study, the researchers successfully implemented Shor’s algorithm using a single photon by encoding and manipulating 32 time-bin modes within its wave packet. This achievement highlights the strong information-processing capabilities of a single photon in high dimensions. According to the team, with commercially available electro-optic modulators capable of 40 GHz bandwidth, it is feasible to encode over 5,000 time-bin modes on long single photons. While managing high-dimensional states can be more challenging than working with qubits, this work demonstrates that these time-bin states can be prepared and manipulated efficiently using a compact programmable fiber loop. Additionally, high-dimensional quantum gates can enhance manipulation, using multiple photons for scalability. Reducing the number of single-photon sources and detectors can improve the efficiency of counting coincidences over accidental counts. Research indicates that high-dimensional states are more resistant to noise in quantum channels, making time-bin-encoded states of long single photons promising for future high-dimensional quantum computing.


Google creates the Mother of all Computers: One trillion operations per second and a mystery

The capability of exascale computing to handle massive amounts of data and run through simulation has created new avenues for scientific modeling. From mimicking black holes and the birth of galaxies to enabling new and improved treatments and diagnoses through customized genome mapping across the globe, this technology has the potential to open new frontiers of knowledge about the cosmos. Computations that would take current supercomputers years to solve will be tractable on exascale machines, paving the way to areas of knowledge that were previously uncharted. For instance, the exascale solution in astrophysics holds the prospect of modeling many phenomena, such as star and galaxy formation, with higher accuracy. These simulations could reveal new insights into the fundamental laws of physics and be used to answer questions about the universe’s formation. In addition, in fields like particle physics, researchers could analyze data from high-energy experiments far more efficiently and perhaps discover more about the nature of matter in the universe. AI is another area to benefit from exascale computing for a supercharge in performance. Present models of AI are very efficient, but current computing machines constrain them.


Taming the Perimeter-less Nature of Global Area Networks

The availability of data and intelligence from across the global span of the network is highly effective in helping ITOps teams understand all the component services and providers their business has exposure to or reliance on. It means being able to pinpoint an impending problem or the root cause of a developing issue within their global area network and to pursue remediation with the right third-party provider ... Certain traffic engineering actions taken on owned infrastructure can change connectivity and performance by altering the path that traffic takes through the unowned portion of the global area network. Consider these actions as adjustments to a network segment that is within your control, such as a network prefix or a BGP route change to bypass a route hijack happening downstream in the unowned Internet-based segment. These traffic engineering actions are manageable tasks that ITOps teams or their automated systems can execute within a global area network setup. While they are implemented in the parts of the network directly controlled by ITOps, their impact is designed to span the entire service delivery chain and its performance.


Firms use AI to keep reality from unreeling amid ‘global deepfake pandemic’

Seattle-based Nametag has announced the launch of its Nametag Deepfake Defense product. A release quotes security technologist and cryptography expert Bruce Schneier, who says “Nametag’s Deepfake Defense engine is the first scalable solution for remote identity verification that’s capable of blocking the AI deepfake attacks plaguing enterprises.” And make no mistake, says Nametag CEO Aaron Painter: “we’re facing a global deepfake pandemic that’s spreading ransomware and disinformation.” The company cites numbers from Deloitte showing that over 50 percent of C-suite executives expect an increase in the number and size of deepfake attacks over the next 12 months. Deepfake Defense consists of three core proprietary technologies: Cryptographic Attestation, Adaptive Document Verification and Spatial Selfie. The first “blocks digital injection attacks and ensures data integrity using hardware-backed keystore assurance and secure enclave technology from Apple and Google.” The second “prevents ID presentation attacks using proprietary AI models and device telemetry that detect even the most sophisticated digital manipulation or forgery.” 


Evolving Data Governance in the Age of AI: Insights from Industry Experts

While evolving existing data governance to meet AI needs is crucial, many organizations need to advance their DG first, before delving into AI governance. Existing data quality efforts do not cover AI requirements. As mentioned in the previous section, current DG programs enforce roles, procedures and tools for some structured data throughout the company. Yet AI models learn from and use very large data sets, containing structured and unstructured data. All this data needs to be of good quality too, so that the AI model can respond accurately, completely, consistently, and relevantly. Companies frequently struggle to determine if their unstructured data, including videos and PowerPoint slides, is of sufficient quality for AI training and implementation. If organizations don’t address this issue, Haskell said, they “throw dollars at AI and AI tools,” because poor-quality input data produces poor-quality output. For this reason, the pressures of data quality fundamentals and clean-up take higher importance over the drive to implement AI. O’Neal likened AI and its governance to an iceberg. The CEO and senior management see only the tip, visible with all of AI’s promise and reward.


On the Road to 2035, Banking Will Walk One of These Three Paths

Economist Impact’s latest report walks through three different potential scenarios that the banking sector will zero in on by 2035. Each paints a vivid picture of how technological advancements, shifting consumer expectations and evolving global dynamics could reshape the financial world as we know it. ... Digital transformation will be central to banking’s future, regardless of which scenario unfolds. Banks that fail to innovate and adapt to new technologies risk becoming obsolete. Trust will be a critical currency in the banking sector of 2035. Whether it’s through enhanced data protection, ethical AI use, or commitment to sustainability, banks must find ways to build and maintain customer trust in an increasingly complex world. The role of banks is likely to expand beyond traditional financial services. In all scenarios, we see banks taking on new responsibilities, whether it’s driving sustainable development, bridging geopolitical divides, or serving as the backbone for broader digital ecosystems. Flexibility and adaptability will be crucial for success. The future is uncertain and potentially fragmented, requiring banks to be agile in their strategies and operations to thrive in various possible environments.



Quote for the day:

"The secret of my success is a two word answer: Know people." -- Harvey S. Firestone

Daily Tech Digest - August 14, 2024

MIT releases comprehensive database of AI risks

While numerous organizations and researchers have recognized the importance of addressing AI risks, efforts to document and classify these risks have been largely uncoordinated, leading to a fragmented landscape of conflicting classification systems. ... The AI Risk Repository is designed to be a practical resource for organizations in different sectors. For organizations developing or deploying AI systems, the repository serves as a valuable checklist for risk assessment and mitigation. “Organizations using AI may benefit from employing the AI Risk Database and taxonomies as a helpful foundation for comprehensively assessing their risk exposure and management,” the researchers write. “The taxonomies may also prove helpful for identifying specific behaviors which need to be performed to mitigate specific risks.” ... The research team acknowledges that while the repository offers a comprehensive foundation, organizations will need to tailor their risk assessment and mitigation strategies to their specific contexts. However, having a centralized and well-structured repository like this reduces the likelihood of overlooking critical risks.


Why Agile Alone Might Not Be So Agile: A Witty Look at Methodology Madness

Agile’s problems often start with a fundamental misunderstanding of what it truly means to be agile. When the Agile Manifesto was penned back in 2001, its authors intended it to be a flexible, adaptable approach to software development, free from the rigid structures and bureaucratic procedures of traditional methodologies. But fast forward to today, and Agile has become its own kind of bureaucratic monster in many organizations — a tyrant disguised as a liberator. Why does this happen? Let’s dissect the two main problems: the roles defined within Agile and the one-size-fits-all mentality that organizations apply to Agile methodology. One of the biggest hurdles to successful Agile adoption is the disconnect between the executive suite and the teams on the ground. Executives often see Agile as a magic bullet for faster delivery and higher productivity, without fully understanding the nuances of the methodology. This disconnect can lead to unrealistic demands and pressure on teams to deliver more with each Sprint, which in turn leads to burnout and decreased quality. Moreover, the Agile Manifesto’s disdain for comprehensive documentation can be problematic in complex projects. 


Feature Flags Wouldn’t Have Prevented the CrowdStrike Outage

Feature flagging is a valuable technique for decoupling the release of new features from code deployment, and advanced feature flagging tools usually support percentage-based rollouts. For example, you can enable a feature on X% of targets to ensure it works before reaching 100%. While it’s true that feature flags can help to prevent outages, given the scale and complexity of the CrowdStrike incident, they would not have been sufficient for three reasons. First, a comprehensive staged rollout requires more than just “gradually enable this flag over the next few days”: there has to be an integration with the monitoring stack to perform health checks and stop the rollout if there are problems, and there has to be a way to integrate with the CD pipeline to reuse the list of targets to roll out to and a list of health checks to track. Available feature flagging solutions require much work and expertise to support staged rollout at any reasonable scale. Second, CrowdStrike’s config had a complex structure requiring a “configuration system” and a “content interpreter.” Such configs would benefit from first-class schema support and end-to-end type safety.
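Percentage-based rollouts of the kind described above are usually implemented by deterministically hashing the flag name and target ID into a bucket, so the same target always gets the same decision and ramping the percentage only ever adds targets. This Python sketch is a toy illustration under that assumption (the flag and host names are invented; real tools layer monitoring hooks and kill switches on top):

```python
import hashlib

def in_rollout(flag, target_id, percent):
    """Deterministically bucket a target into [0, 100) by hashing the
    flag name plus target ID; enabled if the bucket is below the ramp."""
    digest = hashlib.sha256(f"{flag}:{target_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") % 10000 / 100.0
    return bucket < percent

# Ramping from 5% to 50% only adds targets; no earlier decision flips.
hosts = [f"host-{i}" for i in range(1000)]
cohort_5 = {h for h in hosts if in_rollout("new-sensor", h, 5)}
cohort_50 = {h for h in hosts if in_rollout("new-sensor", h, 50)}
print(cohort_5 <= cohort_50)  # → True: the staged rollout is monotonic
```

The monotonic-cohort property is what makes it safe to pause a ramp, check health signals, and resume, which is precisely the monitoring integration the article argues most flagging tools leave to the operator.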


Putting Threat Modeling Into Practice: A Guide for Business Leaders

One of the primary benefits of threat modeling is its ability to reduce the number of defects that make it to production. By identifying potential threats and vulnerabilities during the design phase, companies can implement security measures that prevent these issues from ever reaching the production environment. This proactive approach not only improves the quality of products but also reduces the costs associated with post-production fixes and patches. ... Threat modeling helps us create reusable artifacts and reference patterns as code, which serve as blueprints for future projects. These patterns encapsulate best practices and lessons learned, ensuring that security considerations are consistently applied across all projects. By embedding these reference patterns into development processes, organizations reduce the need to reinvent the wheel for each new product, saving time and resources. ... The existence of well-defined reference patterns reduces the likelihood of errors during development. Developers can rely on these patterns as a guide, ensuring that they follow proven security practices without having to start from scratch. 
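A "reference pattern as code" can be as simple as a machine-checkable baseline that every new design is validated against. The control names below are hypothetical, not drawn from any specific framework.

```python
# Illustrative "reference pattern as code": a machine-checkable security
# baseline that new service configs are validated against. The control
# names are hypothetical, not from any specific framework.

SECURE_SERVICE_BASELINE = {
    "tls_enabled": True,
    "auth_required": True,
    "logging_enabled": True,
}

def check_against_baseline(config, baseline=SECURE_SERVICE_BASELINE):
    """Return the baseline controls this config fails to satisfy."""
    return [name for name, required in baseline.items()
            if config.get(name) != required]
```

Because the pattern is code rather than a document, it can run in CI on every project, which is what makes the practices reusable instead of rediscovered.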


The magic of RAG is in the retrieval

The role of the LLM in a RAG system is to simply summarize the data from the retrieval model’s search results, with prompt engineering and fine-tuning to ensure the tone and style are appropriate for the specific workflow. All the leading LLMs on the market support these capabilities, and the differences between them are marginal when it comes to RAG. Choose an LLM quickly and focus on data and retrieval. RAG failures primarily stem from insufficient attention to data access, quality, and retrieval processes. For instance, merely inputting large volumes of data into an LLM with an expansive context window is inadequate if the data is excessively noisy or irrelevant to the specific task. Poor outcomes can result from various factors: a lack of pertinent information in the source corpus, excessive noise, ineffective data processing, or the retrieval system’s inability to filter out irrelevant information. These issues lead to low-quality data being fed to the LLM for summarization, resulting in vague or junk responses. It’s important to note that this isn’t a failure of the RAG concept itself. Rather, it’s a failure in constructing an appropriate “R” — the retrieval model.
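The division of labor above can be made concrete with a toy sketch of the "R": rank documents by term overlap with the query and pass only the top hits to the LLM. The `summarize` callback stands in for a real LLM call, and production retrievers use embeddings rather than raw word overlap.

```python
# Toy illustration of the "R" in RAG: rank documents by term overlap with
# the query and pass only the top hits to the LLM. The summarize callback
# stands in for a real LLM call; production retrievers use embeddings,
# not raw word overlap.

def retrieve(query, docs, k=2):
    """Return the k documents sharing the most terms with the query."""
    q_terms = set(query.lower().split())
    return sorted(docs,
                  key=lambda d: len(q_terms & set(d.lower().split())),
                  reverse=True)[:k]

def answer(query, docs, summarize):
    """Retrieve context first, then let the LLM summarize it."""
    return summarize(query, retrieve(query, docs))
```

Everything that determines answer quality happens before `summarize` is ever called, which is the article's point: swap the scoring function, not the LLM.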


What enterprises say the CrowdStrike outage really teaches

CrowdStrike made two errors, enterprises say. First, CrowdStrike didn’t account for how sensitive its Falcon endpoint client software was to the tabular data that described how to look for security issues. As a result, an update to that data crashed the client by introducing a condition that hadn’t existed before and hadn’t been properly tested. Second, rather than doing a limited release of the new data file, which would almost certainly have caught the problem and limited its impact, CrowdStrike pushed it out to its entire user base. ... The 37 who didn’t hold Microsoft accountable pointed out that security software necessarily has a unique ability to interact with the Windows kernel software, which means it can create a major problem if there’s an error. But while enterprises aren’t convinced that Microsoft contributed to the problem, over three-quarters think Microsoft could contribute to reducing the risk of a recurrence. Nearly as many said that they believed Windows was more prone to the kind of problem CrowdStrike’s bug created, a view held by 80 of the 89 development managers, many of whom said that Apple’s macOS and Linux didn’t pose the same risk and that neither was impacted by the problem.


MIT researchers use large language models to flag problems in complex systems

The researchers developed a framework, called SigLLM, which includes a component that converts time-series data into text-based inputs an LLM can process. A user can feed these prepared data to the model and ask it to start identifying anomalies. The LLM can also be used to forecast future time-series data points as part of an anomaly detection pipeline. While LLMs could not beat state-of-the-art deep learning models at anomaly detection, they did perform as well as some other AI approaches. If researchers can improve the performance of LLMs, this framework could help technicians flag potential problems in equipment like heavy machinery or satellites before they occur, without the need to train an expensive deep-learning model. “Since this is just the first iteration, we didn’t expect to get there from the first go, but these results show that there’s an opportunity here to leverage LLMs for complex anomaly detection tasks,” says Sarah Alnegheimish, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on SigLLM.
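The conversion step can be sketched as follows. The exact formatting SigLLM uses differs, so treat this only as an illustration of the idea of turning numeric readings into tokens an LLM can process.

```python
# Sketch of converting a numeric time series into text an LLM can consume.
# The exact formatting SigLLM uses differs; this only illustrates the idea.

def series_to_text(values, decimals=0):
    """Render readings as a compact comma-separated token sequence."""
    fmt = f"{{:.{decimals}f}}"
    return ",".join(fmt.format(v) for v in values)
```

Rounding to few digits keeps the token count low, which matters when a long sensor trace has to fit in the model's context window.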


Cybersecurity should return to reality and ditch the hype

This shift from educational content to marketing blurs the line between genuine security insights and commercial interests, leading organizations to invest in solutions that may not address their unique challenges. Additionally, buzzword-driven content has become rampant, where terms like “zero-trust architecture” or “blockchain for security” are frequently mentioned in passing without delving into the practicalities and limitations of these technologies. ... we must first recognize the critical distinction between genuine cybersecurity work and the broader tech-centric content that often overshadows it. Real cybersecurity practice is anchored in a relentless pursuit to understand and mitigate the ever-evolving threats to our systems. It is a discipline that demands deep, continuously updated knowledge of systems, networks, and human behavior, alongside a steadfast commitment to the principles of confidentiality, integrity, and availability. True cybersecurity practitioners are those who engage in the laborious tasks of vulnerability assessment, threat modeling, incident response, and the continuous enhancement of security postures, often without the allure of viral recognition or simplistic solutions.


Harnessing AI for 6G: Six Key Approaches for Technology Leaders

Leaders must understand the enabling technologies behind 6G, such as terahertz and quantum communication, and the transformative potential of AI in network deployment and management. ... Engaging with international bodies like the ITU to contribute to the standardization process is crucial. This will ensure AI technologies are integrated into network designs from the beginning. Early involvement in these discussions will also help technology leaders to anticipate future developments and prepare strategies accordingly. ... Advocating for an AI-native 6G network involves embedding large language models and other AI technology into network equipment. This strategy allows autonomous operations and optimizes network management through machine learning algorithms. Such a proactive approach will streamline operations and enhance the reliability and efficiency of the network infrastructure. ... Emphasize the convergence of computing and communication and develop user-centric services that leverage 6G and AI to improve user experiences across various industries. Leaders should focus on creating solutions that are not only technologically advanced but also address the practical needs and preferences of end-users.


GenAI compliance is an oxymoron. Ways to make the best of it

Confoundingly, genAI software sometimes does things that neither the enterprise nor the AI vendor told it to do. Whether that’s making things up (a.k.a. hallucinating), observing patterns no one asked it to look for, or digging up nuggets of highly sensitive data, it spells nightmares for CIOs. This is especially true when it comes to regulations around data collection and protection. How can CIOs accurately and completely tell customers what data is being collected about them and how it is being used when the CIO often doesn’t know exactly what a genAI tool is doing? What if the licensed genAI algorithm chooses to share some of that ultra-sensitive data with its AI vendor parent? “With genAI, the CIO is consciously taking an enormous risk, whether that is legal risk or privacy policy risks. It could result in a variety of outcomes that are unpredictable,” said Tony Fernandes, founder and CEO of user experience agency UEGroup. “If a person chooses not to disclose race, for example, but an AI is able to infer it and the company starts marketing on that basis, have they violated the privacy policy? That’s a big question that will probably need to be settled in court,” he said.



Quote for the day:

"Before you are a leader, success is all about growing yourself. When you become a leader, success is all about growing others" -- Jack Welch

Daily Tech Digest - July 07, 2024

How Good Is ChatGPT at Coding, Really?

A study published in the June issue of IEEE Transactions on Software Engineering evaluated the code produced by OpenAI’s ChatGPT in terms of functionality, complexity and security. The results show that ChatGPT has an extremely broad range of success when it comes to producing functional code—with a success rate ranging anywhere from as poor as 0.66 percent to as good as 89 percent—depending on the difficulty of the task, the programming language, and a number of other factors. While in some cases the AI generator could produce better code than humans, the analysis also reveals some security concerns with AI-generated code. ... Overall, ChatGPT was fairly good at solving problems in the different coding languages—but especially when attempting to solve coding problems that existed on LeetCode before 2021. For instance, it was able to produce functional code for easy, medium, and hard problems with success rates of about 89, 71, and 40 percent, respectively. “However, when it comes to the algorithm problems after 2021, ChatGPT’s ability to generate functionally correct code is affected. It sometimes fails to understand the meaning of questions, even for easy level problems,” Tang notes.


What can devs do about code review anxiety?

A lot of folks reported that either they would completely avoid picking up code reviews, for example. So maybe someone's like, “Hey, I need a review,” and folks are like, “I'm just going to pretend I didn't see that request. Maybe somebody else will pick it up.” So just kind of completely avoiding it because this anxiety refers to not just getting your work reviewed, but also reviewing other people's work. And then folks might also procrastinate, they might just kind of put things off, or someone was like, “I always wait until Friday so I don't have to deal with it all weekend and I just push all of that until the very last minute.” So definitely you see a lot of avoidance. ... there is this misconception that only junior developers or folks just starting out experience code review anxiety, with the assumption that it's only because you're experiencing the anxiety when your work is being reviewed. But if you think about it, anytime you are a reviewer, you're essentially asked to contribute your expertise and so there is an element of, “If I mess up this review, I was the gatekeeper of this code. And if I mess it up, that might be my fault.” So there's a lot of pressure there.
 

Securing the Growing IoT Threat Landscape

What’s clear is that there should be greater collective responsibility between stakeholders to improve IoT security outlooks. A multi-stakeholder response is necessary, leading to manufacturers prioritising security from the design phase, to governments implementing legislation to mandate responsibility. Currently, some of the leading IoT issues relate to deployment problems. Alex suggests that IT teams also need to ensure default device passwords are updated and complex enough to not be easily broken. Likewise, he highlights the need for monitoring to detect malicious activity. “Software and hardware hygiene is essential, especially as IoT devices are often built on open source software, without any convenient, at scale, security hardening and update mechanisms,” he highlights. “Identifying new or known vulnerabilities and having an optimised testing and deployment loop is vital to plug gaps and prevent entry from bad actors.” A secure-by-design approach should ensure more robust protections are in place, alongside patching and regular maintenance. Alongside this, security features should be integrated from the start of the development process.


Beyond GPUs: Innatera and the quiet uprising in AI hardware

“Our neuromorphic solutions can perform computations with 500 times less energy compared to conventional approaches,” Kumar stated. “And we’re seeing pattern recognition speeds about 100 times faster than competitors.” Kumar illustrated this point with a compelling real-world application. ... Kumar envisions a future where neuromorphic chips increasingly handle AI workloads at the edge, while larger foundational models remain in the cloud. “There’s a natural complementarity,” he said. “Neuromorphics excel at fast, efficient processing of real-world sensor data, while large language models are better suited for reasoning and knowledge-intensive tasks.” “It’s not just about raw computing power,” Kumar observed. “The brain achieves remarkable feats of intelligence with a fraction of the energy our current AI systems require. That’s the promise of neuromorphic computing – AI that’s not only more capable but dramatically more efficient.” ... As AI continues to diffuse into every facet of our lives, the need for more efficient hardware solutions will only grow. Neuromorphic computing represents one of the most exciting frontiers in chip design today, with the potential to enable a new generation of intelligent devices that are both more capable and more sustainable.


Artificial intelligence in cybersecurity and privacy: A blessing or a curse?

AI helps cybersecurity and privacy professionals in many ways, enhancing their ability to protect systems, data, and users from various threats. For instance, it can analyse large volumes of data, spot anomalies, and identify suspicious patterns for threat detection, which helps to find unknown or sophisticated attacks. AI can also defend against cyber-attacks by analysing and classifying network data, detecting malware, and predicting vulnerabilities. ... The harmful effects of AI may be fewer than the positive ones, but they can have a serious impact on organisations that suffer from them. Clearly, as AI technology advances, so do the strategies for both protecting and compromising digital systems. Security professionals should not ignore the risks of AI, but rather prepare for them by using AI to enhance their capabilities and reduce their vulnerabilities. ... As attackers are increasingly leveraging AI, integrating AI defences is crucial to stay ahead in the cybersecurity game. Without it, we risk falling behind.” Consequently, cybersecurity and privacy professionals, and their organisations, should prepare for AI-driven cyber threats by adopting a multi-faceted approach to enhance their defences while minimising risks and ensuring ethical use of technology.


Intel is betting big on its upcoming Lunar Lake XPUs to change how we think of AI in our PCs

Designed with power efficiency in mind, the Lunar Lake architecture is ideal for portable devices such as laptops and notebooks. These processors balance performance and efficiency by integrating Performance Cores (P-cores) and Efficiency Cores (E-cores). This combination allows the processors to handle both demanding tasks and less intensive operations without draining the battery. The Lunar Lake processors will feature a configuration of up to eight cores, split equally between P-cores and E-cores. This design aims to improve battery life by up to 60 per cent, positioning Lunar Lake as a strong competitor to ARM-based CPUs in the laptop market. Intel anticipates that these will be the most efficient x86 processors it has ever developed. ... A major highlight of the Lunar Lake processors is the inclusion of the new Xe2 GPUs as integrated graphics. These GPUs are expected to deliver up to 80 per cent better gaming performance compared to previous generations. With up to eight second-generation Xe-cores, the Xe2 GPUs are designed to support high-resolution gaming and multimedia tasks, including handling up to three 4K displays at 60 frames per second with HDR.


Cyber Threats And The Growing Complexity Of Cybersecurity

Irvine envisions a future where the cybersecurity industry undergoes significant disruption, with a greater emphasis on data-driven risk management. “The cybersecurity industry is going to be disrupted severely. We start to think about cybersecurity more as a risk and we start to put more data and more dollars and cents around some of these analyses,” she predicted. As the industry matures, Dr. Irvine anticipates a shift towards more transparent and effective cybersecurity solutions, reducing the prevalence of smoke and mirrors in the marketplace. She also claims that “AI and LLMs will take over jobs. There will be automation, and we're going to need to upskill individuals to solve some of these hard problems. It's just a challenge for all of us to figure out how.” Kosmowski also remarked that the industry must remain on top of what will continue to be a definitive risk to organizations, “Over 86% of companies are hybrid and expect to remain hybrid for the foreseeable future, plus we know IT proliferation is continuing to happen at a pace that we have never seen before.”


The blueprint for data center success: Documentation and training

In any data center, knowledge is a priceless asset. Documenting configurations, network topologies, hardware specifications, decommissioning regulations, and other items mentioned above ensures that institutional knowledge is not lost when individuals leave the organization. So, no need to panic once the facility veteran retires, as you’ll already have all the information they have! This information becomes crucial for staff, maintenance personnel, and external consultants to understand every facet of the systems quickly and accurately. It provides a more structured learning path, facilitates a deeper understanding of the data center's infrastructure and operations, and allows facilities to keep up with critical technological advances. By creating a well-documented environment, facilities can rest assured knowing that authorized personnel are adequately trained, and vital knowledge is not lost in the shuffle, contributing to overall operational efficiency and effectiveness, and further mitigating future risks or compliance violations.


Why Knowledge Is Power in the Clash of Big Tech’s AI Titans

The advanced AI models currently under development across big tech -- models designed to drive the next class of intelligent applications -- must learn from more extensive datasets than the internet can provide. In response, some AI developers have turned to experimenting with AI-generated synthetic data, a risky proposition that could potentially put an entire engine at risk if even a small semblance of the learning model is inaccurate. Others have pivoted to content licensing deals for access to useful, albeit limited, proprietary training data. ... The real differentiating edge lies in who can develop a systemic means of achieving GenAI data validation, integrity, and reliability with a certificated or “trusted” designation, in addition to acquiring expert knowledge from trusted external data and content sources. These two twin pillars of AI trust, coupled with the raw computing and computational power of new and emerging data centers, will likely be the markers of which big tech brands gain the immediate upper hand.


Should Sustainability be a Network Issue?

The beauty of replacing existing network hardware components with energy-efficient, eco-friendly, small form factor infrastructure elements wherever possible is that no adjustments have to be made to network configurations and topology. In most cases, you're simply swapping out routers, switches, etc. The need for these equipment upgrades naturally occurs with the move to Wi-Fi 6, which requires new network switches, routers, etc., in order to run at full capacity. Hardware replacements can be performed on a phased plan that commits a portion of the annual budget each year for network hardware upgrades ... There is a need in some cases to have discrete computer networks that are dedicated to specific business functions, but there are other cases where networks can be consolidated so that resources such as storage and processing can be shared. ... Network managers aren’t professional sustainability experts—but local utility companies are. In some areas of the U.S., utility companies offer free onsite energy audits that can help identify areas of potential energy and waste reduction.



Quote for the day:

"It takes courage and maturity to know the difference between a hoping and a wishing." -- Rashida Jourdain

Daily Tech Digest - July 04, 2024

Understanding collective defense as a route to better cybersecurity

Organizations invoking collective defense to protect their IT and data assets will usually focus on sharing threat intelligence and coordinating threat response actions to counter malicious threat actors. Success depends on defining and implementing a collaborative cybersecurity strategy where organizations, both internally and externally, work together across industries to defend against targeted cyber threats. ... Putting this into practice requires organizations to commit to coordinating their cybersecurity strategies to identify, mitigate and recover from threats and breaches. This should begin with a process that defines the stakeholders who will participate in the collective defense initiative. These can include anything from private companies and government agencies to non-profits and Information Sharing and Analysis Centers (ISACs), among others. The approach will only work if it is based on mutual trust, so there is an important role for the use of mechanisms such as non-disclosure agreements, clearly defined roles and responsibilities and a commitment to operational transparency. 


Meaningful Ways to Reward Your IT Team and Its Achievements

With technology rapidly advancing, it's more important than ever to invest in personalized IT team skill development and employee well-being programs, which are a win-win for employees and the companies they work for, says Carrie Rasmussen, CIO at human resources software provider Dayforce, in an email interview. ... Synchronize rewards to project workflows, Felker recommends. If it's a particularly difficult time for the team -- tight deadlines, major changes, and other pressing issues -- he suggests scheduling rewards prior to the work's completion to boost motivation. "Having the team get a boost mid-stream on a project is likely to create an additional reservoir of mental energy they can draw from as the project continues," Felker says. ... It's also important to celebrate success whenever possible and to acknowledge that the outcome was the direct result of great teamwork. "Five minutes of recognition from the CEO in a company update or other forum motivates not only the IT team but the rest of the organization to strive for recognition," Nguyen says. He also advises promoting significant team achievements on LinkedIn and other major social platforms. "This will aid recruiting and retention efforts."


Deepfake research is growing and so is investment in companies that fight it

Manipulating human likeness, such as creating deepfake images, video and audio of people, has become the most common tactic for misusing generative AI, a new study from Google reveals. The most common reason to misuse the technology is to influence public opinion – including swaying political opinion – but it is also finding its way into scams, frauds or other means of generating profit. ... Impersonations of celebrities or public figures, for instance, are often used in investment scams, while AI-generated media can also be generated to bypass identity verification and conduct blackmail, sextortion and phishing scams. As the primary data source is media reports, the researchers warn that the perception of AI-generated misuse may be skewed toward the cases that attract headlines. But despite concerns that sophisticated or state-sponsored actors will use generative AI, many of the cases of misuse were found to rely on popular tools that require minimal technical skills. ... With the threat of deepfakes becoming widespread, some companies are coming up with novel solutions that protect images online.


Building Finance Apps: Best Practices and Unique Challenges

By making compliance a central focus from day one of the development process, you maximize your ability to meet compliance needs, while also avoiding the inefficient process of retrofitting compliance features into the app later. For example, implementing transaction reporting after the rest of the app has been built is likely to be a much heavier lift than designing the app from the start to support that feature. ... The tech stack (meaning the set of frameworks and tools you use to build and run your app) can have major implications for how easy it is to build the app, how secure and reliable it is, and how well it integrates with other systems or platforms. For that reason, you'll want to consider your stack carefully, and avoid the temptation to go with whichever frameworks or tools you know best or like the most. ... Given the plethora of finance apps available today, it can be tempting to want to build fancy interfaces or extravagant features in a bid to set your app apart. In general, however, it's better to adopt a minimalist approach. Build the features your users actually want — no more, no less. Otherwise, you waste time and development resources, while also potentially exposing your app to more security risks.


OVHcloud blames record-breaking DDoS attack on MikroTik botnet

Earlier this year, OVHcloud had to mitigate a massive packet rate attack that reached 840 Mpps, surpassing the previous record holder, an 809 Mpps DDoS attack targeting a European bank, which Akamai mitigated in June 2020. ... OVHcloud says many of the high packet rate attacks it recorded, including the record-breaking attack from April, originate from compromised MikroTik Cloud Core Router (CCR) devices designed for high-performance networking. Specifically, the firm identified compromised models CCR1036-8G-2S+ and CCR1072-1G-8S+, which are used as small- to medium-sized network cores. Many of these devices exposed their interface online, running outdated firmware and making them susceptible to attacks leveraging exploits for known vulnerabilities. The cloud firm hypothesizes that attackers might use MikroTik RouterOS's "Bandwidth Test" feature, designed for network throughput stress testing, to generate high packet rates. OVHcloud found nearly 100,000 MikroTik devices that are reachable/exploitable over the internet, providing many potential targets for DDoS actors.


Set Goals and Measure Progress for Effective AI Deployment

Combining human expertise and AI capabilities to augment decision-making is an essential tenet in responsible AI principles. The current age of AI adoption should be considered a “coming together of humans and technology.” Humans will continue to be the custodians and stewards of data, which ties into Key Factor 2 about the need for high-quality data, as humans can help curate the relevant data sets to train an LLM. This is critical, and the “human-in-the-loop” facet should be embedded in all AI implementations to avoid completely autonomous implementations. Apart from data curation, this allows humans to take more meaningful actions when equipped with relevant insights, thus achieving better business outcomes. ... Addressing bias, privacy, and transparency in AI development and deployment is the pivotal metric in measuring its success. Like any technology, laying out guardrails and rules of engagement are core to this factor. Enterprises such as Accenture implement measures to detect and prevent bias in their AI recruitment tools, helping to ensure fair hiring practices. 


Site Reliability Engineering State of the Union for 2024

Automation remains at the core of SRE, with tools for container orchestration and infrastructure management playing a critical role. The adoption of containerization technologies such as Docker and Kubernetes has facilitated more efficient deployment and scaling of applications. In 2024, we can expect further advancements in automation tools that streamline the orchestration of complex microservices architectures, thereby reducing the operational burden on SRE teams. Infrastructure automation and orchestration are pivotal in the realm of SRE, enabling teams to manage complex systems with enhanced efficiency and reliability. The evolution of these technologies, particularly with the advent of containerization and microservices, has significantly transformed how applications are deployed, managed and scaled. ... With the increasing prevalence of cyberthreats and the tightening of regulatory requirements, security and compliance have become integral aspects of SRE. Automated tools for compliance monitoring and enforcement will become indispensable, enabling organizations to adhere to industry standards while minimizing the risk of data breaches and other security incidents.


5 Steps to Refocus Your Digital Transformation Strategy for Strategic Advancement

A strategy built around customer value provides measurable outcomes and drives deeper engagement and loyalty. The digital landscape is riddled with risks and opportunities due to rapid technological advancements, especially in data-centric AI. Businesses must stay agile, continually evaluating the risks and rewards of new technologies while maintaining a sharp focus on how these enhancements serve their customer base. ... Organizations with a customer advisory board should leverage it to gain insights directly from those who use their services or products. Engaging customers from the early stages of planning ensures that their feedback and needs directly influence the transformation strategy, leading to more accurate and beneficial implementations. ... One significant mistake IT leaders make is prioritizing technology over customer needs. While technology is a crucial enabler, it should not dictate the strategy. Instead, it should support and enhance the strategy’s core aim — serving the customer. IT leaders must ensure that digital initiatives align with broader business objectives and directly contribute to customer satisfaction and business efficiency.


OpenSSH Vulnerability “regreSSHion” Grants RCE Access Without User Interaction, Most Dangerous Bug in Two Decades

The good news about the OpenSSH vulnerability is that exploitation attempts have not yet been spotted in the wild. Successfully taking advantage of the exploit required about 10,000 tries to win a race condition using 100 concurrent connections under the researcher’s test conditions, or about six to eight hours to RCE due to obfuscation of ASLR glibc’s address. The attack will thus likely be limited to those wielding botnets when it is uncovered by threat actors. Given the large amount of simultaneous connections needed to induce the race condition, the RCE is also very open to being detected and blocked by firewalls and networking monitoring tools. Qualys’ immediate advice for mitigation also includes updating network-based access controls and segmenting networks where possible. ... “While there is currently no proof of concept demonstrating this vulnerability, and it has only been shown to be exploitable under controlled lab conditions, it is plausible that a public exploit for this vulnerability could emerge in the near future. Hence it’s strongly advised to patch this vulnerability before this becomes the case”.


New paper: AI agents that matter

So are AI agents all hype? It’s too early to tell. We think there are research challenges to be solved before we can expect agents such as the ones above to work well enough to be widely adopted. The only way to find out is through more research, so we do think research on AI agents is worthwhile. One major research challenge is reliability — LLMs are already capable enough to do many tasks that people want an assistant to handle, but not reliable enough that they can be successful products. To appreciate why, think of a flight-booking agent that needs to make dozens of calls to LLMs. If each of those went wrong independently with a probability of, say, just 2%, the overall system would be so unreliable as to be completely useless (this partly explains some of the product failures we’ve seen). ... Right now, however, research is itself contributing to hype and overoptimism because evaluation practices are not rigorous enough, much like the early days of machine learning research before the common task method took hold. That brings us to our paper.
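The compounding-failure argument is easy to check with a few lines of Python (the 2% per-call failure rate is the article's illustrative figure, not a measured one):

```python
def chain_success(p_fail: float, n_calls: int) -> float:
    """Probability that a pipeline of n independent LLM calls all succeed,
    given a per-call failure probability p_fail."""
    return (1.0 - p_fail) ** n_calls

# A flight-booking agent making dozens of calls, each failing 2% of the time:
for n in (10, 20, 50):
    print(f"{n} calls -> {chain_success(0.02, n):.1%} end-to-end success")
```

At 50 calls the end-to-end success rate falls below 37%, which is why per-call reliability, rather than raw capability, is the bottleneck the authors describe.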



Quote for the day:

"You can’t fall if you don’t climb. But there’s no joy in living your whole life on the ground." -- Unknown

Daily Tech Digest - June 29, 2024

Urban Digital Twins: AI Comes To City Planning

Urban digital twin technology involves various tools and methods at each lifecycle phase, and because it is still an emerging field, there's a wide range of variability in available solutions. Different providers may focus on different aspects of the technology, offer varying levels of complexity, or specialize in specific use cases or lifecycle phases. Therefore, it's essential for organizations to carefully evaluate their requirements and compare the offerings of different providers to find the best fit for their specific needs. To make the most of urban digital twin technology, city officials and urban planners should first get a solid grasp on what it can do and the benefits it offers throughout a city's development. By aligning city goals to the capabilities of digital twin solutions at each lifecycle stage, teams can make sure they're picking the right tools for their specific needs. This way, cities can tailor their approach to urban digital twins, ensuring they're making the best choices to reach their desired outcomes and create a smarter, more efficient urban environment.


Empowering Citizen Developers With Low- and No-Code Tools

Whether you are building your own codeless platform or adopting a ready-to-use solution, the benefits can be immense. But before you begin, remember that the core of any LCNC platform is the ability to transform a user's visual design into functional code. This is where the real magic happens, and it's also where the biggest challenges lie. For an LCNC platform to help you achieve success, you need to start with a deep understanding of your target users. What are their technical skills? What kind of applications do they want to use? The answers to these questions will inform every aspect of your platform's design, from the user interface/user experience (UI/UX) to the underlying architecture. The UI/UX is crucial for the success of any LCNC platform, but it is just the tip of the iceberg. Under the hood, you'll need a powerful engine that can translate visual elements into clean, efficient code. This typically involves complex AI algorithms, data structures, and a deep understanding of various programming languages. 
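The "visual design into functional code" step can be made concrete with a toy generator. This hypothetical sketch assumes a minimal JSON-style form spec (the field names and spec shape are invented for illustration) and emits HTML the way an LCNC engine's code-generation core might:

```python
def render_form(spec: dict) -> str:
    """Translate a minimal declarative UI spec into an HTML form —
    a toy version of the spec-to-code step at the core of LCNC platforms."""
    fields = []
    for field in spec.get("fields", []):
        fields.append(
            f'<label>{field["label"]}'
            f'<input type="{field.get("type", "text")}" name="{field["name"]}"></label>'
        )
    body = "\n  ".join(fields)
    return (f'<form action="{spec["action"]}">\n'
            f'  {body}\n  <button>Submit</button>\n</form>')

spec = {"action": "/signup",
        "fields": [{"name": "email", "label": "Email", "type": "email"}]}
print(render_form(spec))
```

A production engine adds validation, data binding, and backend scaffolding on top of this translation step, but the principle is the same: a declarative artifact the citizen developer edits, compiled into code they never see.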


Will AI replace cybersecurity jobs?

While AI and ML can streamline many cybersecurity processes, organizations cannot remove the human element from their cyberdefense strategies. Despite their capabilities, these technologies have limitations that often require human insight and intervention, including a lack of contextual understanding and susceptibility to inaccurate results, adversarial attacks and bias. Because of these limitations, organizations should view AI as an enhancement, not a replacement, for human cybersecurity expertise. AI can augment human capabilities, particularly when dealing with large volumes of threat data, but it cannot fully replicate the contextual understanding and critical thinking that human experts bring to cybersecurity. ... AI can automate threat detection and analysis by scanning massive volumes of data in real time. AI-powered threat detection tools can swiftly identify and respond to cyberthreats, including emerging threats and zero-day attacks, before they breach an organization's network. AI tools can also combat insider threats, a significant concern for modern organizations.
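As a simplified illustration of the automated-detection idea (a classic statistical technique, not any vendor's algorithm), a median-absolute-deviation outlier check over per-host event counts can flag anomalous spikes using only the standard library:

```python
from statistics import median

def flag_anomalies(counts: list[int], threshold: float = 3.5) -> list[int]:
    """Return indices of counts whose robust (MAD-based) z-score exceeds
    the threshold — a standard statistical spike detector."""
    med = median(counts)
    mad = median(abs(c - med) for c in counts) or 1e-9  # avoid division by zero
    return [i for i, c in enumerate(counts)
            if 0.6745 * abs(c - med) / mad > threshold]

# Hourly login failures per host; the last host is clearly spiking:
print(flag_anomalies([10, 12, 11, 9, 200]))  # -> [4]
```

Such a detector surfaces the spike, but deciding whether it is an attack, a misconfigured job, or a legitimate batch process is exactly the contextual judgment that still requires a human analyst.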


Decoding OWASP – A Security Engineer’s Roadmap to Application Security

While the OWASP Top 10 provides a foundational framework for understanding and addressing the most critical web application security risks, OWASP offers a range of other resources that can be instrumental in developing and refining an application security strategy. These include the OWASP Testing Guide, Cheat Sheets, and a variety of tools and projects designed to aid in the practical aspects of security implementation. OWASP Testing Guide – The OWASP Testing Guide is a comprehensive resource that offers a deep dive into the specifics of testing web applications for security vulnerabilities. It covers a wide array of potential vulnerabilities beyond the Top 10, providing guidance on how to rigorously test and validate each one. ... OWASP Cheat Sheets – The OWASP Cheat Sheets are concise, focused guides containing the best practices on a specific security topic. They serve as handy guides for security teams and developers to quickly reference when implementing security measures. Cheat sheets can be used as training materials to educate developers and security professionals on specific security issues and how to mitigate them.
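To make the cheat-sheet idea concrete, here is a small allowlist-style validator in the spirit of the OWASP Input Validation Cheat Sheet (the patterns below are illustrative examples, not taken from the cheat sheet itself):

```python
import re

# Allowlist validation: accept only input known to be valid; reject everything else.
ALLOWED = {
    "username": re.compile(r"[A-Za-z0-9_]{3,32}"),
    "us_zip":   re.compile(r"\d{5}(-\d{4})?"),
}

def is_valid(kind: str, value: str) -> bool:
    """Validate untrusted input against a strict allowlist pattern."""
    pattern = ALLOWED.get(kind)
    return bool(pattern and pattern.fullmatch(value))

print(is_valid("username", "alice_01"))        # True
print(is_valid("username", "alice'; DROP--"))  # False
```

The allowlist approach inverts the usual blocklist instinct: instead of enumerating bad inputs, it enumerates the only shapes of input the application will accept.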


Intel Demonstrates First Fully Integrated Optical I/O Chiplet

The fully integrated OCI chiplet leverages Intel’s field-proven silicon photonics technology and integrates a silicon photonics integrated circuit (PIC), which includes on-chip lasers and optical amplifiers, with an electrical IC. The OCI chiplet demonstrated at OFC was co-packaged with an Intel CPU but can also be integrated with next-generation CPUs, GPUs, IPUs and other system-on-chips (SoCs). This first OCI implementation supports up to 4 terabits per second (Tbps) bidirectional data transfer, compatible with peripheral component interconnect express (PCIe) Gen5. The live optical link demonstration showcases a transmitter (Tx) and receiver (Rx) connection between two CPU platforms over a single-mode fiber (SMF) patch cord. ... The current chiplet supports 64 channels of 32 Gbps data in each direction up to 100 meters (though practical applications may be limited to tens of meters due to time-of-flight latency), utilizing eight fiber pairs, each carrying eight dense wavelength division multiplexing (DWDM) wavelengths.
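The quoted figures are internally consistent, as a quick arithmetic check shows:

```python
fiber_pairs = 8
wavelengths_per_fiber = 8  # DWDM wavelengths per fiber
gbps_per_channel = 32

channels = fiber_pairs * wavelengths_per_fiber      # 64 channels each direction
per_direction_gbps = channels * gbps_per_channel    # 2048 Gbps ≈ 2 Tbps
bidirectional_tbps = 2 * per_direction_gbps / 1000  # ≈ 4 Tbps, as stated

print(channels, per_direction_gbps, bidirectional_tbps)
```

So "64 channels of 32 Gbps each way over eight fiber pairs of eight wavelengths" and "up to 4 Tbps bidirectional" describe the same link budget.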


Artificial General Intelligence (AGI): Understanding the Milestones

The idea of creating a machine or program capable of thinking and acting like a person was first proposed in the early 1900s. The Turing Test, designed by Alan Turing in 1950 to assess intelligence comparable to that of humans, set the stage. ... Machine learning emerged in the 1950s and 1960s with statistical algorithms that could identify patterns in data and use them to make future decisions without external supervision. ... Expert systems and symbolic AI centered on encoding knowledge and applying rules and symbols in human reasoning. ... Deep learning, a subset of machine learning, has been a crucial breakthrough on the journey toward AGI. In tasks like speech and image recognition, convolutional neural networks and recurrent neural networks perform at a human level. ... AGI research has produced numerous important results, ranging from theoretical foundations to deep learning advances. Even if AGI remains an ideal, present AI research is pushing the envelope, imagining a time when AI will fundamentally revolutionize the way we live and work for the better.


Unlocking Innovation: How Critical Thinking Supercharges Design Thinking

Critical thinking involves the objective analysis and evaluation of an issue to form a judgment. It's about questioning assumptions, discerning hidden values, evaluating evidence, and assessing conclusions. This methodical approach is crucial in professional environments for making informed decisions, solving complex problems, and planning strategically. ... Design thinking is a human-centered approach to innovation that integrates the needs of people, the possibilities of technology, and the requirements for business success. It involves five key stages: Empathize, Define, Ideate, Prototype, and Test. Design thinking promotes creativity, collaborative effort, and iterative learning. Merging critical thinking into the design thinking process enhances each stage with thorough analysis and robust evaluation, leading to innovative and effective solutions. ... Critical thinking provides the analytical rigor needed to identify core issues and evaluate solutions, while design thinking fosters creativity and user-centered design.


DAST Vs. Penetration Testing: Comprehensive Guide to Application Security Testing

Dynamic Application Security Testing (DAST) and penetration testing are crucial for identifying and mitigating security vulnerabilities in web applications. While both aim to enhance application security, they differ significantly in their approach, execution, and outcomes. ... Dynamic Application Security Testing (DAST) is an automated security testing methodology that interacts with a running web application to identify potential security vulnerabilities. DAST tools simulate real-world attacks by injecting malicious code or manipulating data, focusing on uncovering vulnerabilities that attackers could exploit. DAST also evaluates the effectiveness of security controls within the application. ... Penetration testing is a security assessment process carried out by skilled professionals, often called ethical hackers. These experts simulate real-world attacks to identify and exploit application, network, or system vulnerabilities. While comprehensive, such manual testing can be time-consuming and expensive.
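To contrast the two approaches concretely: a DAST tool's core loop is mechanical payload injection. This simplified sketch (the payloads, URL, and parameter name are illustrative) builds probe URLs the way a scanner would, using only the standard library; a human pen tester, by contrast, would adapt each probe to what the application reveals:

```python
from urllib.parse import urlencode

# A few classic probe payloads a DAST scanner might inject automatically:
PAYLOADS = ["<script>alert(1)</script>", "' OR '1'='1", "../../etc/passwd"]

def build_probe_urls(base_url: str, param: str) -> list[str]:
    """Generate one probe URL per payload; a real scanner would send each
    request and inspect the response for signs of a vulnerability."""
    return [f"{base_url}?{urlencode({param: p})}" for p in PAYLOADS]

for url in build_probe_urls("https://example.test/search", "q"):
    print(url)
```

The automation is what makes DAST cheap and repeatable, and the fixed payload list is what makes it shallower than a human who chains findings together.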


There is no OT apocalypse, but OT security deserves more attention

The whole narrative surrounding attacks on OT environments is therefore quite exaggerated, as far as Van der Walt is concerned. “We are not in the OT apocalypse,” in his words. This is important to know, he believes. “In fact, there is a narrative in the market that is out to get organizations to take action and invest.” In other words, we hear more and more that OT environments are under constant attack, when at the end of the day these are actually attacks on organizations’ IT environments. ... “There does exist a very frightening risk that attackers can take over the OT environment,” as Derbyshire puts it. To demonstrate that, he has set up an attack and published about it in scientific circles, which should result in a better understanding of a real OT ransomware attack. ... Finally, it is worth noting that OT security does need more attention. Above all, the researchers want to contribute to the discussion about what an OT attack really is. As Van der Walt summarizes, “IT security has been around for about 25 years, OT security is still very young. We should have learned from our mistakes, so it shouldn’t take another 25 years to get OT security to where IT security is today.”


Manage AI threats with the right technology architecture

Amid dynamic market conditions, choosing a future-proof technology architecture for threat management becomes almost inevitable. This underscores the necessity of selecting the best technologies and the right strategic approach. ... The best-of-breed approach allows companies to respond flexibly to new threats and changes in business requirements. When a new technology comes to market, companies can easily integrate it without overhauling their entire security architecture. This promotes agile adaptation and quick implementation of new solutions to stay current with the latest technology. ... Managing an integrated platform is less complex than managing multiple independent systems. This reduces the training requirements for security staff and minimizes the risk of errors arising from the complexity of integrating different systems. ... Ultimately, the choice should efficiently meet the company’s security goals. It is crucial to invest in advanced technologies and ensure that expenditures are proportionate to the risk. This means that investments should be carefully weighed without incurring unnecessary costs.



Quote for the day:

"Most people live with pleasant illusions, but leaders must deal with hard realities." -- Orrin Woodward