
Daily Tech Digest - September 09, 2025


Quote for the day:

“The greatest leader is not necessarily the one who does the greatest things. He is the one that gets the people to do the greatest things.” -- Ronald Reagan


Neuromorphic computing and the future of edge AI

While quantum computing captures the mainstream headlines, neuromorphic computing has positioned itself as a force in the next era of AI. Whereas conventional AI relies heavily on GPU/TPU-based architectures, neuromorphic systems mimic the parallel, event-driven nature of the human brain. ... Neuromorphic hardware has shown promise in edge environments where power efficiency, latency and adaptability matter most. From wearable medical devices to battlefield robotics, systems that can “think locally” without requiring constant cloud connectivity offer clear advantages. ... As neuromorphic computing matures, ethical and sustainability considerations will shape adoption as much as raw performance. Spiking neural networks’ efficiency reduces carbon footprints by cutting energy demands compared to GPUs, aligning with global decarbonization targets. At the same time, ensuring that neuromorphic models are transparent, bias-aware and auditable is critical for applications in healthcare, defense and finance. Calls for AI governance frameworks now explicitly include neuromorphic AI, reflecting its potential role in high-stakes decision-making. Embedding sustainability and ethics into the neuromorphic roadmap will ensure that efficiency gains do not come at the cost of fairness or accountability.
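The event-driven behavior that distinguishes spiking neural networks from always-on GPU workloads can be illustrated with a toy leaky integrate-and-fire neuron. The parameter values and the `lif_run` name below are arbitrary assumptions for illustration, not the model of any particular neuromorphic chip.

```python
# Toy leaky integrate-and-fire (LIF) neuron illustrating event-driven,
# spiking computation.

def lif_run(input_current, threshold=1.0, leak=0.9, reset=0.0):
    """Simulate one LIF neuron over a sequence of input currents.

    Output (and hence downstream work) is produced only when the membrane
    potential crosses the threshold, which is why spiking hardware can be
    far more energy-frugal than always-on GPU pipelines."""
    v = 0.0
    spikes = []
    for i in input_current:
        v = leak * v + i       # potential decays, then integrates the input
        if v >= threshold:
            spikes.append(1)   # event: a spike is emitted
            v = reset          # potential resets after spiking
        else:
            spikes.append(0)   # no event, no downstream computation
    return spikes

print(lif_run([0.5, 0.5, 0.5, 0.0, 0.0]))  # -> [0, 0, 1, 0, 0]
```

Between spikes the neuron emits nothing, so a downstream circuit wired to these events sits idle most of the time; that sparsity is the source of the power-efficiency advantage described above.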


10 security leadership career-killers — and how to avoid them

“Security has evolved from being the end goal to being a business-enabling function,” says James Carder, CISO at software maker Benevity. “That means security strategies, communications, planning, and execution need to be aligned with business outcomes. If security efforts aren’t returning meaningful ROI, CISOs are likely doing something wrong. Security should not operate as a cost center, and if we act or report like one, we’re failing in our roles.” ... CISOs generally know that the security function can’t be the “department of no.” But some don’t quite get to a “yes,” either, which means they’re still failing their organizations in a way that could stymie their careers, says Aimee Cardwell, CISO in residence at tech company Transcend and former CISO of UnitedHealth Group. ... CISOs who are too rigid with the rules do a disservice to their organizations and their professional prospects, says Cardwell. Such a situation recently came up in her organization, where one of her team members initially declined to permit a third-party application from being used by workers, pointing to a security policy barring such apps. ... CISOs who don’t have a firm grasp on all that they must secure won’t succeed in their roles. “If they don’t have visibility, if they can’t talk about the effectiveness of the controls, then they won’t have credibility and the confidence in them among leadership will erode,” Knisley says.


A CIO's Evolving Role in the Generative AI Era

The dual mandate facing CIOs today is demanding but unavoidable. They must deliver quick AI pilots that boards can take to the shareholders while also enforcing guardrails on security, ethics and cost. Too much caution can make CIOs irrelevant. This balancing act requires not only technical fluency but also narrative skill. The ability to translate AI experiments into business outcomes that CEOs and boards can trust can make CIOs a force. The MIT report highlights another critical decision point: whether to build or buy. Many enterprises attempt internal builds, but externally built AI partnerships succeed twice as often. CIOs, pressured for fast results, must be pragmatic about when to build and when to partner. Gen AI does not - and never will - replace the CIO role. But it demands course corrections. The CIO who once focused on alignment must now lead business transformation. Those who succeed will act less as CIOs and more as AI diplomats, bridging hype with pragmatism, connecting technological opportunities to shareholder value and balancing the boardroom's urgency with operational reality. As AI advances, so does the CIO's role - but only if they evolve. Their reporting line to the CEO symbolizes greater trust and higher stakes. Unlike previous technology cycles, AI has brought the CIO to the forefront of transformation.


Building an AI Team May Mean Hiring Where the Talent Is, Not Where Your Bank Is

Much of the adaptation of banking to AI requires close collaboration between AI talent and people who understand how the banking processes involved need to work. This will put people literally closer together, facilitating the quick, in-depth, and always frequent interactions that make collaboration work; paradoxically, increased automation needs more face-to-face dealings at the formative stages. However, the "where" of the space will also hinge on where AI and innovation talent can be recruited, where that talent is being bred and wants to work, and the types of offices that talent will be attracted to. ... "Banks are also recruiting for emerging specialties in responsible AI and AI governance, ensuring that their AI initiatives are ethical, compliant and risk-managed," the report says. "As ‘agentic AI’ — autonomous AI agents — and generative AI gain traction, firms will need experts in these cutting-edge fields too." ... Decisions don’t stop at the border anymore. Jesrani says that savvy banks look for pockets of talent as well. ... "Banks are contemplating their global strategies because emerging markets can provide them with talent and capabilities that they may not be able to obtain in the U.S.," says Haglund. "Or there may be things happening in those markets that they need to be a part of in order to advance their core business capabilities."


How Data Immaturity is Preventing Advanced AI

Data immaturity, in the context of AI, refers to an organisation’s underdeveloped or inadequate data practices, which limit its ability to leverage AI effectively. It encompasses issues with data quality, accessibility, governance, and infrastructure. Critical signs of data immaturity include inconsistent, incomplete, or outdated data leading to unreliable AI outcomes; data silos across departments hindering access and comprehensive analysis; and weak data governance caused by a lack of policies on data ownership, compliance and security, which introduces risks and restricts AI usage. ... Data immaturity also leads to a lack of trust in analysis and predictability of execution. That puts a damper on any plans to leverage AI in a more autonomous manner—whether for business or operational process automation. A recent study by Kearney found that organisations globally are expecting to increase data and analytics budgets by 22% in the next three years as AI adoption scales. Fragmented data limits the predictive accuracy and reliability of AI, which are crucial for autonomous functions where decisions are made without human intervention. As a result, organisations must get their data houses in order before they will be able to truly take advantage of AI’s potential to optimise workflows and free up valuable time for humans to focus on strategy and design, tasks for which most AI is not yet well suited.


From Reactive Tools to Intelligent Agents: Fulcrum Digital’s AI-First Transformation

To mature, the LLM is just one layer. Then you require the integration layer: how you integrate it. Every customer has multiple assets in their business that have to connect with the LLM layer. Every business has many existing applications and new applications, and businesses are also buying new AI agents from the market. How do you bring new AI agents, existing old systems, and new modern systems of the business together, integrating them with the LLM? That is one aspect. The second aspect is that every business has its own data, so the LLM has to be trained on those datasets. Copilot and OpenAI are trained on vast public data, but that is the LLM. Industry wants SLMs—small language models, private language models, and industry-oriented language models. So LLMs have to be fine-tuned according to the industry and also fine-tuned according to their data. Nowadays people have come to realise that LLMs will never give you 100 per cent accurate solutions, no matter which LLM you choose. That is the phenomenon customers and everybody are now learning. The difference between us and others: many players who are new to the game deliver results with LLMs at 70–75 per cent accuracy. Because we have matured this game with multiple LLMs coexisting, and with those LLMs together maturing our Ryze platform, we are able to deliver more than 93–95 per cent accuracy.


You Didn't Get Phished — You Onboarded the Attacker

Many organizations respond by overcorrecting: "I want my entire company to be as locked down as my most sensitive resource." It seems sensible—until the work slows to a crawl. Without nuanced controls that allow your security policies to distinguish between legitimate workflows and unnecessary exposure, simply applying rigid controls that lock everything down across the organization will grind productivity to a halt. Employees need access to do their jobs. If security policies are too restrictive, employees are either going to find workarounds or continually ask for exceptions. Over time, risk creeps in as exceptions become the norm. This collection of internal exceptions slowly pushes you back towards "the castle and moat" approach. The walls are fortified from the outside, but open on the inside. And giving employees the key to unlock everything inside so they can do their jobs means you are giving one to Jordan, too. ... A practical way to begin is by piloting ZSP on your most sensitive system for two weeks. Measure how access requests, approvals, and audits flow in practice. Quick wins here can build momentum for wider adoption, and prove that security and productivity don't have to be at odds. ... When work demands more, employees can receive it on request through time-bound, auditable workflows. Just enough access is granted just in time, then removed. By taking steps to operationalize zero standing privileges, you empower legitimate users to move quickly—without leaving persistent privileges lying around for Jordan to find.
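The time-bound, auditable workflow described above can be sketched in miniature. The names here (`Grant`, `request_access`) and the in-memory audit log are illustrative assumptions, not any vendor's API.

```python
# Hypothetical sketch of a zero-standing-privileges grant: access is requested,
# approved, time-bound, and auditable, then expires on its own.
import time

class Grant:
    """A temporary privilege that carries its own expiry."""
    def __init__(self, user, resource, ttl_seconds):
        self.user = user
        self.resource = resource
        self.expires_at = time.time() + ttl_seconds

    def is_active(self):
        return time.time() < self.expires_at

audit_log = []  # every grant is recorded: who, what, who approved, until when

def request_access(user, resource, ttl_seconds, approver):
    """Grant just-enough access just in time, leaving an audit trail."""
    grant = Grant(user, resource, ttl_seconds)
    audit_log.append((user, resource, approver, grant.expires_at))
    return grant

g = request_access("alice", "prod-db", ttl_seconds=900, approver="bob")
print(g.is_active())  # True now; False after 15 minutes, with nothing to revoke
```

The key design point is that expiry is a property of the grant itself: there is no standing privilege to forget about, so there is nothing persistent for "Jordan" to find.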


OT Security: When Shutting Down Is Not an Option

Some of the most urgent and disruptive threats today are unfolding far from the keyboard, in operational technology environments that keep factories running, energy flowing and transportation systems moving. In these sectors, digital attacks can lead to physical consequences, and defending OT environments demands specialized skills. Real-world incidents across manufacturing and critical infrastructure show how quickly operations can be disrupted when OT systems are not adequately protected. Just this week, Jaguar Land Rover disclosed that a cyberattack "severely disrupted" its automotive manufacturing operations. ... OT environments present challenges that differ sharply from traditional IT. While security is improving, OT security teams must protect legacy control systems running outdated firmware, making them difficult to patch. Operators need to prioritize uptime and safety over system changes, and IT and OT teams frequently work in silos. These conditions mean that breaches can have physical as well as digital consequences, from halting production to endangering lives. Training tailored to OT is essential to secure critical systems while maintaining operational continuity. ... An OT cybersecurity learning ecosystem is not a one-time checklist but a continuous program. The following elements help organizations choose training that meets current needs while building capacity for ongoing improvement.


Connected cars are racing ahead, but security is stuck in neutral

Connected cars are essentially digital platforms with multiple entry points for attackers. The research highlights several areas of concern. Remote access attacks can target telematics systems, wireless interfaces, or mobile apps linked to the car. Data leaks are another major issue because connected cars collect sensitive information, including location history and driving behavior, which is often stored in the cloud. Sensors present their own set of risks. Cameras, radar, lidar, and GPS can be manipulated, creating confusion for driver assistance systems. Once inside a vehicle, attackers can move deeper by exploiting the CAN bus, which connects key systems such as brakes, steering, and acceleration. ... Most drivers want information about what data is collected and where it goes, yet very few said they have received that information. Brand perception also plays a role. Many participants prefer European or Japanese brands, while some expressed distrust toward vehicles from certain countries, citing political concerns, safety issues, or perceived quality gaps. ... Manufacturers are pushing out new software-defined features, integrating apps, and rolling out over the air updates. This speed increases the number of attack paths and makes it harder for security practices and rules to keep up.


Circular strategies for data centers

Digital infrastructure is scaling rapidly, with rising AI workloads and increased compute density shaping investment decisions. Growth on that scale can generate unnecessary waste unless sustainability is integrated into planning. Circular thinking makes it possible to expand capacity without locking facilities into perpetual hardware turnover. Operators can incorporate flexibility into refresh cycles by working with vendors that design modular platforms or by adopting service-based models that build in maintenance, refurbishment, and recovery. ... Sustainable planning also involves continuous evaluation. Instead of defaulting to wholesale replacement, facilities can test whether assets still meet operational requirements through reconfiguration, upgrades, or role reassignment. This kind of iterative approach gives operators a way to match innovation with responsibility, ensuring that capacity keeps pace with demand without discarding equipment prematurely. ... The transition to circular practices is more than an environmental gesture. For data centers, it is a strategic shift in how infrastructure is procured, maintained, and retired. Extending lifecycles, redeploying equipment internally, refurbishing where possible, and ensuring secure, responsible recycling at the end of use all contribute to a more resilient operation in a resource-constrained and tightly regulated industry.

Daily Tech Digest - September 14, 2024

Three Critical Factors for a Successful Digital Transformation Strategy

Just as important as the front-end experience are the back-end operations that keep and build the customer relationship. Value-added digital services that deliver back-end operational excellence can improve the customer experience through better customer service, improved security and more. Emerging tech like artificial intelligence can substantially improve how companies get a clearer view into their operations and customer base. Take data flow and management, for example. Many executives report they are swimming in information, yet around half admit they struggle to analyze it, according to research by Paynearme. While data is important, the insights derived from that data are key to the conclusions executives must draw. Maintaining a digital record of customer information, transaction history, spend behaviors and other metrics and applying AI to analyze and inform decisions can help companies provide better service and protect their end users. They can streamline customer service, for instance, by immediately sourcing relevant information and delivering a resolution in near-real time, or by automating the analysis of spend behavior and location data to shut down potential fraudsters.


AI reshaping the management of remote workforce

In a remote work setting, one of the biggest challenges for organizations remains streamlining operations. For a scattered team, AI emerges as a revolutionary tool for automating shift scheduling and rostering using historical pattern analytics. Historical data on staff availability, productivity, and work patterns enables organizations to optimise schedules and strike a balance between operational needs and employee preferences. Subsequently, this reduces conflicts and enhances overall work efficiency. Beyond this, AI analyses staff work duration and shifts, further enabling organizations to predict staffing needs and optimise resource allocation. This enhances capacity modelling to ensure the right team member is available to handle tasks during peak times, preventing overstaffing or understaffing issues. ... With expanding use cases, AI-powered facial recognition technology has become a critical part of identity verification and security in remote work settings. Organisations need to ensure security and confidentiality at all stages of their work. In tandem, AI-powered facial recognition ensures that only authorized personnel have access to the company’s sensitive systems and data.
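The kind of historical-pattern forecast described above can be sketched with a toy model. The `forecast_staff` name, the window size, and the round-up policy are illustrative assumptions standing in for whatever analytics an organization actually deploys.

```python
# Toy staffing forecast from historical demand: predict the next shift's need
# as a recent moving average, rounded up so peaks are not understaffed.
import math

def forecast_staff(history, window=3):
    """Predict the next period's staffing need as the mean of the last
    `window` observations, rounded up."""
    recent = history[-window:]
    return math.ceil(sum(recent) / len(recent))

# Agents needed in the last five shifts; the forecast covers the next one:
print(forecast_staff([4, 6, 5, 7, 8]))   # mean of (5, 7, 8) = 6.67 -> 7
```

A real scheduler would add seasonality, employee preferences, and constraint solving, but even this sketch shows how historical data turns into a staffing decision that balances over- and understaffing.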


The DPDP act: Navigating digital compliance under India’s new regulatory landscape

Adapting to the DPDPA will require tailored approaches, as different sectors face unique challenges based on their data handling practices, customer bases, and geographical scope. However, some fundamental strategies can help businesses effectively navigate this new regulatory landscape. First, conducting a comprehensive data audit is essential. Businesses need to understand what data they collect, where it is stored, and who has access to it. Mapping out data flows allows organizations to identify risks and address them proactively, laying the groundwork for robust compliance. Appointing a Data Protection Officer (DPO) is another critical step. The DPO will be responsible for overseeing compliance efforts, serving as the primary point of contact for regulatory bodies, and handling data subject requests. While it is not yet established whether the appointment is mandatory, it is safe to say that this role is vital for embedding a culture of data privacy within the organisation. Technology can also play a significant role in ensuring compliance. Tools such as Unified Endpoint Management (UEM) solutions, encryption technologies, and data loss prevention (DLP) systems can help businesses monitor data flows, detect anomalies, and prevent unauthorized access.


10 Things To Avoid in Domain-Driven Design (DDD)

To prevent potential issues, it is your responsibility to maintain a domain model that is uncomplicated and accurately reflects the domain. This diligence keeps the focus on modeling the components of the domain that offer strategic importance and on streamlining or excluding less critical elements. Remember, Domain-Driven Design (DDD) is primarily concerned with strategic design, not with burdening the domain model with unnecessary intricacies. ... It's crucial to leverage Domain-Driven Design (DDD) to deeply analyze and concentrate on the domain's most vital and influential parts. Identify the aspects that deliver the highest value to the business and ensure that your modeling efforts are closely aligned with the business's overarching priorities and strategic objectives. Actively collaborating with key business stakeholders is essential to gain a comprehensive understanding of what holds the greatest value to them and subsequently prioritize these areas in your modeling endeavors. This approach will optimally reflect the business's critical needs and contribute to the successful realization of strategic goals.


How to Build a Data Governance Program in 90 Days

With a new data-friendly CIO at the helm, Hidalgo was able to assemble the right team for the job and, at the same time, create an environment of maximum engagement with data culture. She assembled discussion teams and even a data book club that read and reviewed the latest data governance literature. In turn, that team assembled its own data governance website as a platform not just for sharing ideas but also for spreading the momentum. “We kept the juices flowing, kept the excitement,” Hidalgo recalled. “And then with our data governance office and steering committee, we engaged with all departments, we have people from HR, compliance, legal, product, everywhere – to make sure that everyone is represented.” ... After choosing a technology platform in May, Hidalgo began the most arduous part of the process: preparation for a “jumpstart” campaign that would kick off in July. Hidalgo and her team began to catalog existing data one subset of data at a time – 20 KPIs or so – and complete its business glossary terms. Most importantly, Hidalgo had all along been building bridges between Shaw’s IT team, data governance crew, and business leadership to the degree that when the jumpstart was completed – on time – the entire business saw the immense value-add of the data governance that had been built.


Varied Cognitive Training Boosts Learning and Memory

The researchers observed that varied practice, not repetition, primed older adults to learn a new working memory task. Their findings, which appear in the journal Intelligence, propose diverse cognitive training as a promising whetstone for maintaining mental sharpness as we age. “People often think that the best way to get better at something is to simply practice it over and over again, but robust skill learning is actually supported by variation in practice,” said lead investigator Elizabeth A. L. Stine-Morrow ... The researchers narrowed their focus to working memory, or the cognitive ability to hold one thing in mind while doing something else. “We chose working memory because it is a core ability needed to engage with reality and construct knowledge,” Stine-Morrow said. “It underpins language comprehension, reasoning, problem-solving and many sorts of everyday cognition.” Because working memory often declines with aging, Stine-Morrow and her colleagues recruited 90 Champaign-Urbana locals aged 60-87. At the beginning and end of the study, researchers assessed the participants’ working memory by measuring each person’s reading span: their capacity to remember information while reading something unrelated.


Why Cloud Migrations Fail

One stumbling block on the cloud journey is misunderstanding or confusion around the shared responsibility model. This framework delineates the security obligations of cloud service providers, or CSPs, and customers. The model necessitates a clear understanding of end-user obligations and highlights the need for collaboration and diligence. Broad assumptions about the level of security oversight provided by the CSP can lead to security/data breaches that the U.S. National Security Agency (NSA) notes “likely occur more frequently than reported.” It’s also worth noting that 82% of breaches in 2023 involved cloud data. The confusion is often magnified in cases of a cloud “lift-and-shift,” a method where business-as-usual operations, architectures and practices are simply pushed into the cloud without adaptation to their new environment. In these cases, organizations may be slow to implement proper procedures, monitoring and personnel to match the security limitations of their new cloud environment. While the level of embedded security can differ depending on the selected cloud model, the customer must often enact strict security and identity and access management (IAM) controls to secure their environment.


AI - peril or promise?

The interplay between AI data centers and resource usage necessitates innovative approaches to mitigate environmental impacts. Advances in cooling technology, such as liquid immersion cooling and the use of recycled water, offer potential solutions. Utilizing recycled or non-potable water for cooling can alleviate the pressure on freshwater resources. Moreover, AI itself can be leveraged to enhance the efficiency of data centers. AI algorithms can optimize energy use by predicting cooling needs, managing workloads more efficiently, and reducing idle times for servers. Predictive maintenance powered by AI can also prevent equipment failures, thereby reducing the need for excessive cooling. This is good news as the sector continues to use AI to achieve greater efficiencies and cost savings and to drive improvements in services; the expected impact of AI on the operational side of data centers is very positive. Over 65 percent of survey respondents reported that their organizations are regularly using generative AI, nearly double the percentage from the 2023 survey, and around 90 percent of respondents expect their data centers to be more efficient as a direct result of AI applications.


HP Chief Architect Recalibrates Expectations Of Practical Quantum Computing’s Arrival From Generations To Within A Decade

Hewlett Packard Labs is now adopting a holistic co-design approach, partnering with other organizations developing various qubits and quantum software. The aim is to simulate quantum systems to solve real-world problems in solid-state physics, exotic condensed matter physics, quantum chemistry, and industrial applications. “What is it like to actually deliver the optimization we’ve been promised with quantum for quite some time, and achieve that on an industrial scale?” Bresniker posed. “That’s really what we’ve been devoting ourselves to—beginning to answer those questions of where and when quantum can make a real impact.” One of the initial challenges the team tackled was modeling benzyne, an exotic chemical derived from the benzene ring. “When we initially tackled this problem with our co-design partners, the solution required 100 million qubits for 5,000 years—that’s a lot of time and qubits,” Bresniker told Frontier Enterprise. Considering current quantum capabilities are in the tens or hundreds of qubits, this was an impractical solution. By employing error correction codes and simulation methodologies, the team significantly reduced the computational requirements.


New AI reporting regulations

At its core, the new proposal requires developers and cloud service providers to fulfill reporting requirements aimed at ensuring the safety and cybersecurity resilience of AI technologies. This necessitates the disclosure of detailed information about AI models and the platforms on which they operate. One of the proposal’s key components is cybersecurity. Enterprises must now demonstrate robust security protocols and engage in what’s known as “red-teaming”—simulated attacks designed to identify and address vulnerabilities. This practice is rooted in longstanding cybersecurity practices, but it does introduce new layers of complexity and cost for cloud users. Based on the negative impact of red-teaming on enterprises, I suspect it may be challenged in the courts. The regulation does increase focus on security testing and compliance. The objective is to ensure that AI systems can withstand cyberthreats and protect data. However, this is not cheap. Achieving this result requires investments in advanced security tools and expertise, typically stretching budgets and resources. My “back of the napkin” calculations figure about 10% of the system’s total cost.



Quote for the day:

"Your greatest area of leadership often comes out of your greatest area of pain and weakness." -- Wayde Goodall

Daily Tech Digest - September 07, 2024

Why RAG Is Essential for Next-Gen AI Development

The success of RAG implementation often depends on a company’s willingness to invest in curating and maintaining high-quality knowledge sources. Failure to do this will severely impact RAG performance and may lead to LLM responses of much poorer quality than expected. Another difficult task that companies frequently run into is developing an effective retrieval mechanism. Dense retrieval, a semantic search technique, and learned retrieval, which involves the system recalling information, are two approaches that produce favorable results. Many companies need help integrating RAG into existing AI systems and scaling RAG to handle large knowledge bases. Potential solutions to these challenges include efficient indexing and caching and implementing distributed architectures. Another common problem is properly explaining the reasoning behind RAG-generated responses, as they often involve information taken from multiple sources and models. ... By integrating external knowledge sources, RAG helps LLMs overcome the limitations of parametric memory and dramatically reduce hallucinations. As Douwe Kiela, an author of the original paper about RAG, said in a recent interview
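The retrieval step at the heart of RAG can be sketched as follows. The bag-of-words "embedding" below is a toy assumption standing in for a real embedding model, and all names (`embed`, `retrieve`, `knowledge_base`) are illustrative.

```python
# Minimal sketch of the retrieval step in RAG: embed the query and the
# knowledge base, rank passages by similarity, and prepend the best match
# to the LLM prompt.
import math
from collections import Counter

def embed(text):
    """Toy embedding: a sparse bag-of-words vector."""
    return Counter(text.lower().replace(".", "").split())

def cosine(a, b):
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, passages, k=1):
    """Return the top-k passages most similar to the query."""
    q = embed(query)
    return sorted(passages, key=lambda p: cosine(q, embed(p)), reverse=True)[:k]

knowledge_base = [
    "RAG augments generation with retrieved documents.",
    "SMR drives overlap tracks to raise capacity.",
]
context = retrieve("how does RAG use retrieved documents", knowledge_base)[0]
prompt = f"Context: {context}\nQuestion: How does RAG use retrieved documents?"
print(context)
```

In a production system the toy `embed` would be a learned dense encoder and the `sorted` call an approximate nearest-neighbor index, which is exactly where the indexing and caching challenges mentioned above arise.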


A global assessment of third-party connection tampering

To be clear, there are many reasons a third party might tamper with a connection. Enterprises may tamper with outbound connections from their networks to prevent users from interacting with spam or phishing sites. ISPs may use connection tampering to enforce court or regulatory orders that demand website blocking to address copyright infringement or for other legal purposes. Governments may mandate large-scale censorship and information control. Despite the fact that everyone knows it happens, no other large operation has previously looked at the use of connection tampering at scale and across jurisdictions. We think that creates a notable gap in understanding what is happening in the Internet ecosystem, and that shedding light on these practices is important for transparency and the long-term health of the Internet. ... Ultimately, connection tampering is possible only by accident – an unintended side effect of protocol design. On the Internet, the most common identity is the domain name. In a communication on the Internet, the domain name is most often transmitted in the “server name indication (SNI)” field in TLS – exposed in cleartext for all to see.
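The cleartext exposure of SNI can be demonstrated by parsing the raw bytes of a TLS server_name extension body (RFC 6066) with no decryption at all. This is a minimal illustrative parser, not a full TLS implementation; the hand-built extension body below is an assumption for the example.

```python
# Parse the host_name out of a TLS server_name (SNI) extension body.
# Because this field travels unencrypted, any on-path observer can read it --
# which is what makes SNI-based connection tampering possible.
import struct

def parse_sni_extension(body: bytes) -> str:
    """Extract the host_name from a server_name extension body (RFC 6066)."""
    (list_len,) = struct.unpack("!H", body[:2])  # server_name_list length
    pos, end = 2, 2 + list_len
    while pos < end:
        name_type = body[pos]                    # 0 = host_name
        (name_len,) = struct.unpack("!H", body[pos + 1:pos + 3])
        name = body[pos + 3:pos + 3 + name_len]
        if name_type == 0:
            return name.decode("ascii")          # readable in cleartext
        pos += 3 + name_len
    raise ValueError("no host_name entry")

# Hand-built extension body carrying "example.com":
host = b"example.com"
body = struct.pack("!HBH", len(host) + 3, 0, len(host)) + host
print(parse_sni_extension(body))   # -> example.com
```

Nothing in this parser touches key material: the domain name is recoverable from the bytes alone, which is the "accident of protocol design" the article describes.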


The human brain deciphered and the first neural map created

The formation of such a neural map was made possible with the help of several technologies. First, as mentioned earlier, the employment of electron microscopy enabled the researchers to obtain images of the brain tissue at a scale that could capture details of synapses. These images provided the necessary level of detail to reveal how neurons are connected and communicate with other neurons. Second, the massive volume of data produced by the imaging process required high computing capability and machine learning to parse and analyze. It was also claimed that the company’s experience in AI and data processing was helpful in correctly assembling the 2D images into a 3D reconstruction and in properly segmenting the many parts of the brain tissue. Finally, the decision to share the neural map as an open-access database has extended the potential for future research and cooperation in neuroscience. The development of this neural map has excellent potential for neuroscience and other disciplines. In neuropharmacology, the map offers an opportunity to gain a substantial amount of information about how neurons are wired within the brain and how certain diseases, such as schizophrenia or autism, occur.


InfoQ AI, ML and Data Engineering Trends Report - September 2024

The AI-enabled agent programs are another area that’s seeing a lot of innovation. Autonomous agents and GenAI-enabled virtual assistants are coming up in different places to help software developers become more productive. AI-assisted programs can enable individual team members to increase productivity or collaborate with each other. GitHub’s Copilot, Microsoft Teams’ Copilot, DevinAI, Mistral’s Codestral, and JetBrains’ local code completion are some examples of AI agents. GitHub also recently announced its GitHub Models product to enable the large community of developers to become AI engineers and build with industry-leading AI models. ... With the emergence of multimodal language models like GPT-4o, privacy and security when handling non-textual data like videos become even more critical in the overall machine learning pipelines and DevOps processes. The podcast panelists’ AI safety and security recommendations are to have a comprehensive lineage and mapping of where your data is going. Train your employees in proper data privacy and security practices, and also make the secure path the path of least resistance for them so everyone within your organization easily adopts it.


Does it matter what kind of hard drive you use in a NAS?

Consumer drives aren't designed for heavier workloads, nor are they built to run with multiple units mounted adjacent to one another. This can cause issues with vibration, particularly for 3.5-inch mechanical drives. Firmware and endurance are other concerns, since the drives won't be built with RAID and NAS in mind. Combining the two with heavier workloads across multiple user accounts and clients could lead to earlier drive failure. These drives will be cheaper than their NAS equivalents, however, and no drive is immune to failure; you could see consumer drives outlive NAS drives inside the same enclosure. ... Shingled magnetic recording (SMR) and conventional magnetic recording (CMR) are two technologies for storing data on the spinning platters inside an HDD. CMR uses concentric circles (or tracks) for saving data, which are segmented into sectors. Everything is recorded linearly, with each sector written and read independently, allowing specific sectors to be rewritten without affecting any other sector on the drive. SMR is a newer technology that takes the same concentric-track approach but overlaps the tracks to boost storage capacity, at the cost of both performance and reliability.
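The SMR rewrite penalty described above can be sketched with a toy model (a hypothetical simplification, not any vendor's firmware behavior): because shingled tracks overlap, rewriting one track invalidates every track layered on top of it through the end of the zone, while CMR tracks rewrite independently.

```python
# Toy model of CMR vs SMR rewrite cost (illustrative only; real drives
# use zone caches and translation layers that complicate this picture).

def cmr_rewrite_cost(track: int, zone_size: int) -> int:
    """On CMR, any single track can be rewritten in place."""
    return 1

def smr_rewrite_cost(track: int, zone_size: int) -> int:
    """On SMR, rewriting a track means rewriting it plus every
    overlapping track shingled after it, to the end of the zone."""
    return zone_size - track

if __name__ == "__main__":
    zone = 64  # tracks per shingled zone (made-up figure)
    for t in (0, 32, 63):
        print(f"track {t}: CMR={cmr_rewrite_cost(t, zone)} write(s), "
              f"SMR={smr_rewrite_cost(t, zone)} write(s)")
```

The model shows why SMR suits archival, write-once workloads but struggles with the random rewrites typical of multi-user NAS duty.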


What’s next in AI and HPC for IT leaders in digital infrastructure?

The AI nirvana for enterprises? In 2024, we'll see enterprises build ChatGPT-like GenAI systems for their own internal information resources. Since many companies' data resides in silos, there is a real opportunity to manage AI demand, build AI expertise, and foster cross-functional collaboration. This access to data comes with an existential security risk that could strike at the heart of a company: intellectual property. That's why in 2024, forward-thinking enterprises will use AI for robust data security and privacy measures to ensure intellectual property doesn't get exposed on the public internet. They will also shrink the threat landscape by homing in on internal security risks. This includes developing internal regulations to ensure sensitive information isn't leaked to non-privileged internal groups and individuals. ... At this early stage of AI initiatives, enterprises depend on technology providers and their partners to advise on and support the global roll-out of AI initiatives. In Asia Pacific, it's a race to build, deploy, and subsequently train the right AI clusters. Since a prime use case is cybersecurity threat detection, working with the respective cybersecurity technology providers is key.


Red Hat unleashes Enterprise Linux AI - and it's truly useful

In a statement, Joe Fernandes, Red Hat's Foundation Model Platform vice president, said, "RHEL AI provides the ability for domain experts, not just data scientists, to contribute to a built-for-purpose gen AI model across the hybrid cloud while also enabling IT organizations to scale these models for production through Red Hat OpenShift AI." RHEL AI isn't tied to any single environment. It's designed to run wherever your data lives -- whether on-premises, at the edge, or in the public cloud. This flexibility is crucial when implementing AI strategies without completely overhauling your existing infrastructure. The program is now available on Amazon Web Services (AWS) and IBM Cloud as a "bring your own (BYO)" subscription offering. In the next few months, it will be available as a service on AWS, Google Cloud Platform (GCP), IBM Cloud, and Microsoft Azure. Dell Technologies has announced a collaboration to bring RHEL AI to Dell PowerEdge servers. This partnership aims to simplify AI deployment by providing validated hardware solutions, including NVIDIA accelerated computing, optimized for RHEL AI.


Quantum computing is coming – are you ready?

The good thing is that awareness of the challenge is increasing. Some verticals, such as finance, have it absolutely top of mind, with some already having quantum-safe algorithms in production. Likewise, some manufacturing sectors are examining the impact, given the implications of having to upgrade embedded or IoT devices. And, of course, medical devices present a particularly heightened security and trust challenge. "I think for these device manufacturers, they had a moment where they realized they can't go ahead and push the devices out as fast as they are without thinking about proper security," says Hojjati. But not everyone is on top of the problem. Which is why DigiCert is backing Quantum Readiness Day on September 26, to coincide with the expected finalization of the new algorithms by NIST. The worldwide event will bring together experts, both in how to break encryption and how to implement the upcoming post-quantum algorithms, helping you make sure you're ahead of the problem. As Hojjati says, whether we've reached Q Day or not, "This is real, this is here, the standards have been released. ..."


How cyberattacks on offshore wind farms could create huge problems

Successful cyberattacks could lower public trust in wind energy and other renewables, the report from the Alan Turing Institute says. The authors add that artificial intelligence (AI) could help boost the resilience of offshore wind farms to cyber threats. However, government and industry need to act fast. The fact that offshore wind installations are relatively remote makes them particularly vulnerable to disruption. Land turbines can have nearby offices, so getting someone to visit the site is much easier than at sea. Offshore turbines tend to require remote monitoring and special technology for long distance communication. These more complicated solutions mean that things can go wrong more easily. ... Most cyberattacks are financially motivated, such as the ransomware attacks that have targeted the NHS in recent years. These typically block the users’ access to their computer data until a payment is made to the hackers. But critical infrastructure such as energy installations are also exposed. There may be various motivations for launching cyberattacks against them. One important possibility is that of a hostile state that wants to disrupt the UK’s energy supply – and perhaps also undermine public confidence in it.


Data Skills Gap Is Hampering Productivity; Is Upskilling the Answer?

"A well-crafted data strategy will highlight where specific skills need to be developed to achieve business objectives," said Michael Curry, president of data modernization at Rocket Software. He explained that since a data strategy typically involves both risk mitigation and value realization, it's important to consider skill gaps on both sides. Kjell Carlsson, head of AI strategy at Domino Data Labs, said better data prep, analysis, and visualization skills would help organizations become more data-driven and make better decisions that would significantly improve growth and curtail waste. "Imbuing your workforce with better prompt engineering skills will help them code, research, and write vastly more efficiently," he said. "A well-crafted data strategy will highlight where specific skills need to be developed to achieve business objectives," said Michael Curry, president of data modernization at Rocket Software. He explained that since a data strategy typically involves both risk mitigation and value realization, it's important to consider skill gaps on both sides. ... "Imbuing your workforce with better prompt engineering skills will help them code, research, and write vastly more efficiently," he said.



Quote for the day:

"Leadership should be born out of the understanding of the needs of those who would be affected by it." -- Marian Anderson

Daily Tech Digest - May 03, 2023

What You Need to Know About Neuromorphic Computing

Neuromorphic computing is a type of computer engineering that mimics the human brain and nervous system. “It's a hardware and software computing element that combines several specializations, such as biology, mathematics, electronics, and physics,” explains Abhishek Khandelwal, vice president, life sciences, at engineering consulting firm Capgemini Engineering. While current AI technology has become better at outperforming human capabilities in multiple fields, such as Level 4 self-driving vehicles and generative models, it still offers only a crude approximation of human/biological capabilities and is only useful in a handful of fields. ... Neuromorphic supporters believe the technology will lead to more intelligent systems. “Such systems could also learn automatically and self-regulate what to learn and where to learn from,” Natarajan says. Meanwhile, combining neuromorphic technology with neuro-prosthetics, (such as Neuralink) could lead to breakthroughs in prosthetic limb control and various other types of human assistive and augmented technologies.
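The event-driven behavior neuromorphic hardware mimics can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron, the basic unit of spiking neural networks. This is a didactic sketch with made-up constants, not code for any specific neuromorphic chip:

```python
# Minimal leaky integrate-and-fire (LIF) neuron. The membrane potential
# integrates weighted input events, leaks back toward rest between events,
# and emits a spike whenever it crosses a threshold. All parameter values
# here are illustrative.

def lif_run(inputs, v_rest=0.0, v_thresh=1.0, leak=0.9, weight=0.3):
    """Return a spike train (1 = spike, 0 = silent) for a sequence of
    input events."""
    v = v_rest
    spikes = []
    for x in inputs:
        # Leaky integration: decay toward rest, then add weighted input.
        v = v_rest + leak * (v - v_rest) + weight * x
        if v >= v_thresh:
            spikes.append(1)
            v = v_rest  # reset after firing
        else:
            spikes.append(0)
    return spikes

# Event-driven character: quiet input does no work and produces no spikes;
# sustained input eventually drives the neuron over threshold.
print(lif_run([0, 0, 0, 0]))
print(lif_run([1, 1, 1, 1, 1, 1]))
```

Because computation happens only when spikes arrive, hardware built around units like this can sit nearly idle between events, which is the source of the power-efficiency claims above.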


How the influence of data and the metaverse will revolutionize businesses and industries

Today, business is all about data: collecting, storing, transforming, and analysing it to gain insights—to make decisions. Just like how ChatGPT requires massive amounts of data to create human-like language, businesses need data to augment human decision-making. From machine and building performance to energy and emissions, data is the crucial link between the physical and digital worlds. It’s also the key to solving efficiency and sustainability challenges that are now more urgent than ever. If the metaverse is meant to transform business and industries, it must be built on solid data foundations. ... Digital transformation started with connecting physical assets via IoT and edge controls. Its disruptive potential has proven to carry operational and energy efficiency across all levels of an enterprise. When we introduce powerful software capabilities and start leveraging the generated data, we can create virtual representations of the real world by combining simulation, augmented reality (AR), data sharing, and visualization all at once. 


Distributed Tracing Is Failing. How Can We Save It?

Engineers are to some degree creatures of habit. The engineering organizations I’ve spent time with have a deep level of comfort with dashboards, and statistics show that’s where engineers spend the most time — they provide data in an easy-to-understand graphical user interface (GUI) for engineers to quickly answer questions. However, it’s challenging when trace data is kept in its own silo. To access its value, an engineer must navigate away from their primary investigation to a separate place in the app — or worse, a separate app. Then the engineer must try to recreate whatever context they had when they determined that trace data could supplement the investigation. Over time, all but a few power users start to drift away from using the trace query page on a regular basis. Not because the trace query page is any less useful. It’s simply outside of the average engineer’s scope. It’s like a kitchen appliance with lots of uses when you’re cooking, but because it’s kept out of sight in the back of a drawer, you never think to use it — even if it’s the best tool for the job.


We’re Still in the ‘Wild West’ When it Comes to Data Governance, StreamSets Says

A lack of visibility into data pipelines raises the risk of other data security problems, the company says. “The research reveals that 48% of businesses can’t see when data is being used in multiple systems, and 40% cannot ensure data is being pulled from the best source,” it says. “Moreover, 54% cannot integrate pipelines with a data catalog, and 57% cannot integrate pipelines into a data fabric.” Who holds responsibility for cleaning up the data mess? Well, that’s another area with a bit of murkiness. About half (47%) of StreamSets survey respondents say the centralized IT team bears responsibility for managing the data. However, 18% said the line of business holds primary responsibility, while it’s split between the business and IT in 35% of cases. A second survey released by StreamSets last week highlights the difficulty in running data pipelines in the modern enterprise. Many companies have thousands of data pipelines in use and are hard pressed to build, manage, and maintain them at the pace required by the business, according to StreamSets.


Quantum computing: What are the data storage challenges?

One of the core challenges of quantum computers is that their storage systems are unsuitable for long-term storage due to quantum decoherence, the effect of which can build up over time. Decoherence occurs when quantum computing data is brought into existing data storage frameworks and causes qubits to lose their quantum status, resulting in corrupted data and data loss. “Quantum mechanical bits can’t be stored for long times as they tend to decay and collapse after a while,” says Weides. “Depending on the technology used, they can collapse within seconds, but the best ones are in a minute. You don’t really achieve 10 years of storage. ...” Quantum computers will need data storage during computation, but that needs to be a quantum memory for storing super-positioned or entangled states, and storage durations are going to present a challenge. So, it’s likely data storage for quantum computing will need to rely on conventional storage, such as in high-performance computing (HPC). Considering the massive financial investment required for quantum computing, to introduce a limitation of “cheap” data storage elements as a cost-saving exercise would be counter-productive.
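The qubit lifetimes Weides describes (seconds to about a minute, versus the ten-plus years expected of archival storage) can be sketched with a simple exponential decoherence model. The characteristic times below are illustrative assumptions, not measurements from any real device:

```python
import math

def coherence(t: float, t2: float) -> float:
    """Fraction of quantum coherence remaining after time t, under a
    simple exponential-decay model with characteristic time T2."""
    return math.exp(-t / t2)

# Illustrative T2 values in seconds; real figures vary widely by technology.
ten_years = 10 * 365 * 24 * 3600
for name, t2 in [("short-lived qubit", 1.0), ("best-case qubit", 60.0)]:
    print(f"{name}: after 1 s -> {coherence(1, t2):.3f}, "
          f"after 10 years -> {coherence(ten_years, t2):.1e}")
```

Even with the optimistic T2 of a minute, essentially no coherence survives archival timescales, which is why long-term storage for quantum workflows is expected to fall back on conventional media.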


7 speed bumps on the road to AI

There are many issues and debates that humans know to avoid in certain contexts, such as holiday dinners or the workplace. AIs, though, need to be taught how to handle such issues in every context. Some large language models are programmed to deflect loaded questions or just refuse to answer them, but some users simply won't let a sleeping dog lie. When such a user notices the AI dodging a tricky question, such as one that invokes racial or gender bias, they'll immediately look for ways to get under those guardrails. Bias in data and insufficient data are issues that can be corrected for over time, but in the meantime, the potential for mischief and misuse is huge. And, while getting AI to churn out hate speech is bad enough, the plot thickens considerably when we start using AI to explore the moral implications of real life decisions. Many AI projects depend on human feedback to guide their learning. Often, a project of scale needs a high volume of people to build the training set and adjust the model’s behavior as it grows. For many projects, the needed volume is only economically feasible if trainers are paid low wages in poor countries. 


7 ways to improve employee experience and workplace culture

The traditional hierarchical way of managing employees has been shown to be largely ineffective. Companies run as adhocracies are more productive as they foster knowledge sharing, workplace collaboration, and rapid adaptation—some of the most important attributes for companies in the knowledge-based age. By encouraging employees to be more self-sufficient and less dependent on their superiors, you can promote greater efficiency and effectiveness in the workplace. Start adopting more self-service options for employees. Modern IT and HR systems can be calibrated to your employees’ needs and enable them to help themselves, whether they want to book a vacation, access important documents, get a better screen, or access an enterprise app. Although hybrid and remote work seems to be the preferred model for many organizations, it still has disadvantages. Many remote and hybrid employees struggle to manage the blurred boundary between work and personal life, or the often less-than-ideal workplace setups.


What Does a Strong Agile Culture Look Like?

A strong culture is critical for Agile organizations to be successful. Agile requires organizations, and therefore its employees, to be ready to welcome changing requirements and inspect and adapt at any given moment. Teams are supposed to be self-managing and self-organizing. Stakeholders need to see working products frequently. Breaking that down, expectations are that projects change all the time but still need to be delivered in quick increments to stakeholders, all the while teams are managing themselves. ... Psychological safety in the workplace refers to the extent to which employees feel safe to speak up, share their ideas, and take risks without fear of negative consequences. It is the belief that one will not be punished or humiliated for speaking up with ideas, questions, concerns, or mistakes. When there is psychological safety in the workplace, employees are more likely to be engaged, motivated, and productive. They are also more likely to collaborate, share their knowledge and expertise, and contribute to innovation.


9 ways to avoid falling prey to AI washing

It’s not uncommon for a company to acquire dubious AI solutions, and in such situations, the CIO may not necessarily be at fault. It could be “a symptom of poor company leadership,” says Welch. “The business falls for marketing hype and overrules the IT team, which is left to pick up the pieces.” To prevent moments like these, organizations need to foster a collaborative culture in which the opinion of tech professionals is valued and their arguments are heard out thoroughly. At the same time, CIOs and tech teams should build their reputation within the company so their opinion is more easily incorporated into decision-making processes. To achieve that, they should demonstrate expertise, professionalism, and soft skills. “I don’t feel there’s a problem with detecting AI washing for the CIO,” says Max Kovtun, chief innovation officer at Sigma Software Group. “The bigger problem might be the push from business stakeholders or entrepreneurs to use AI in any form because they want to look innovative and cutting edge. So the right question would be how not to become an AI washer under the pressure of entrepreneurship.”


Skilling up the security team for the AI-dominated era

The increasing reliance of AI and machine learning models in all technological walks of life is expected to rapidly change the complexion of the threat landscape. Meanwhile, organically training security staff, bringing in AI experts who can be trained to aid in security activities, and evangelizing the hardening of AI systems will all take considerable runway. Experts share what security leaders will need to shape their skill base and prepare to face both sides of growing AI risk: risk to AI systems and risks from AI-based attacks. There is some degree of crossover in each domain. For example, machine learning and data science skills are going to be increasingly relevant on both sides. In both cases existing security skills in penetration testing, threat modeling, threat hunting, security engineering, and security awareness training will be as important as ever, just in the context of new threats. However, the techniques needed to defend against AI and to protect AI from attack also have their own unique nuances, which will in turn influence the make-up of the teams called to execute on those strategies.



Quote for the day:

"Remember teamwork begins by building trust. And the only way to do that is to overcome our need for invulnerability." -- Patrick Lencioni