Daily Tech Digest - October 22, 2023

The AI Evolution Will Happen Faster Than Computing Evolution

Compute will still evolve, but the question is how fast. The Internet led to the massively distributed data center approach we know as the cloud, a terrible term, but I digress. Today, though, the power of computing can only increase so much. Moore’s Law looks increasingly difficult to sustain as transistors approach the size of an atom. Infrastructure limitations are causing all sorts of headaches for software vendors, who now face a litany of options for making AI systems more efficient with precious compute resources. ... It’s all about the data and its compounding growth. Having transactional data ready for analytics, with speed and efficiency, is what allows AI systems to scale. As we’ve seen, AI systems must be fast, and SingleStore markets that capability with its combined in-memory and on-disk storage. There’s also the flexibility that customers demand: a hybrid approach that cuts across cloud services and on-premises deployments. With SingleStore’s vector indexing and JSON handling, those capabilities open up further.
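To make that concrete, here is a minimal sketch, not from the article, of what combining vector search with JSON metadata in SingleStore can look like from Python over the database's MySQL-compatible wire protocol. The host, table, and embeddings are hypothetical; JSON_ARRAY_PACK, DOT_PRODUCT and the `::$` JSON accessor follow SingleStore's documented features, but verify exact syntax against your SingleStore version.

```python
# Hypothetical sketch: vector similarity plus JSON metadata in SingleStore.
import pymysql

conn = pymysql.connect(host="svc-example.singlestore.com",  # placeholder host
                       user="admin", password="REPLACE_ME", database="demo")

with conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS docs (
            id BIGINT PRIMARY KEY,
            meta JSON,          -- flexible JSON metadata
            embedding BLOB      -- packed float32 vector
        )
    """)
    cur.execute("""
        INSERT IGNORE INTO docs VALUES
            (1, '{"title": "intro"}', JSON_ARRAY_PACK('[0.1, 0.9, 0.3]'))
    """)
    conn.commit()
    # Rank rows by similarity to a query vector, pulling a JSON field alongside.
    cur.execute("""
        SELECT id, meta::$title AS title,
               DOT_PRODUCT(embedding, JSON_ARRAY_PACK('[0.2, 0.8, 0.4]')) AS score
        FROM docs ORDER BY score DESC LIMIT 5
    """)
    print(cur.fetchall())
conn.close()
```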


Preparing for the Shift to Platform Engineering

To effectively support the transition, leaders must commit to a culture of platform engineering. Simply adopting technology isn’t enough; it needs to be backed by a thorough strategy that lets developers truly benefit from the tools and structures of platform engineering. What does this look like? Success requires leaders and developers to encourage collaboration and break down silos between operations and development teams. A bridge between developers and operations can be built by committing to cloud migration, creating a centralized platform, and investing in collaborative tools and the strategy to back them up. Platform engineering demands dedication to a collaborative culture, instigated from the top and empowered by overall strategic decisions and operations. This includes continued learning for developers to stay on top of new languages, trends, challenges and priorities, both internally and externally. Teams are more successful when they use performance metrics to track workflows, conduct effective maintenance, and improve on a consistent, ongoing basis.
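The article doesn't name specific metrics, but DORA-style measures such as deployment frequency and lead time for changes are one common way to track workflows. A minimal sketch with entirely hypothetical event data:

```python
# Illustrative only: compute two DORA-style metrics from deploy events.
from datetime import datetime
from statistics import mean

# (commit_time, deploy_time) pairs for changes shipped in the last week
deploys = [
    (datetime(2023, 10, 16, 9), datetime(2023, 10, 16, 15)),
    (datetime(2023, 10, 18, 11), datetime(2023, 10, 19, 10)),
    (datetime(2023, 10, 20, 8), datetime(2023, 10, 20, 12)),
]

window_days = 7
deploy_frequency = len(deploys) / window_days              # deploys per day
lead_time_h = mean((d - c).total_seconds() / 3600 for c, d in deploys)

print(f"Deployment frequency: {deploy_frequency:.2f}/day")
print(f"Mean lead time for changes: {lead_time_h:.1f} h")
```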


Data Governance in Action: The CDO, the CISO and the Perks of Collaboration

Maintaining independent reporting structures for the CDO and CISO, separate from the Chief Information Officer (CIO), is crucial. That’s because when they report directly to the executive leadership or the CEO, they can provide independent updates on data governance and cybersecurity, ensuring clarity and objectivity in decision-making for critical data-related matters. With this arrangement, senior management gains a holistic view of risk management, compliance, and strategic decision-making, without any biases that may arise from reporting to the CIO. Biases, in this context, can manifest in several ways. For example, a CIO might prioritise IT initiatives that align with the department’s goals or budget constraints, potentially overlooking or downplaying certain data governance or security concerns. Hence, a hierarchical reporting structure with the CIO in the middle can unintentionally filter or influence the information that reaches senior management, which could impact their ability to make well-informed, impartial decisions.


North Korean hackers are targeting software developers and impersonating IT workers

Diamond Sleet was observed using two attack paths: the first consisted of deploying the ForestTiger backdoor, while the second delivered payloads for DLL search-order hijacking attacks. Onyx Sleet used a different attack path: after successfully exploiting the TeamCity vulnerability, the threat actor creates a user account (named krtbgt), runs system discovery commands and finally deploys a proxy tool named HazyLoad to establish a persistent connection. “In past operations, Diamond Sleet and other North Korean threat actors have successfully carried out software supply chain attacks by infiltrating build environments,” Microsoft noted. Separately, North Korean state-sponsored hackers have been linked to a social engineering campaign targeting software developers through GitHub. By pretending to be a developer or a recruiter, the attacker convinced victims to collaborate on a GitHub repository and ultimately download and execute malware on their devices.


Five key questions about disaster recovery as a service

Almost any organisation can use DRaaS because it requires little in the way of hardware or up-front investment. However, its use is most common in organisations that want to minimise downtime but cannot justify investment in redundant hardware, whether on-premises, in a datacentre or in a colocation facility. This is likely to involve a trade-off between performance and recovery times on one hand, and cost on the other. DRaaS that runs in the public cloud will be slower than dedicated systems, but it will still be faster to recover from than basic cloud-based backup or BaaS. Another application for DRaaS is where conventional DR systems are less practical. This includes branch and remote offices that may have lower-bandwidth connections and little in the way of on-site IT support. There is also a trend towards using DRaaS to provide resilience for cloud-based infrastructure. Such cloud-to-cloud disaster recovery can range from replicating entire cloud production environments or specific VMs to a secondary cloud location, to providing additional redundancy and continuity for SaaS applications and even Microsoft 365.
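As one concrete illustration, not from the article, here is a low-level building block of cloud-to-cloud resilience: copying an EBS snapshot to a second AWS region with boto3. The snapshot ID and regions are placeholders; a real DRaaS product layers orchestration, replication scheduling and failover on top of primitives like this.

```python
# Minimal sketch: replicate an EBS snapshot into a recovery region.
import boto3

ec2_dr = boto3.client("ec2", region_name="eu-west-1")  # recovery region

resp = ec2_dr.copy_snapshot(
    SourceRegion="us-east-1",                  # primary region
    SourceSnapshotId="snap-0123456789abcdef0", # placeholder snapshot
    Description="Nightly DR copy of app volume",
)
print("DR snapshot created:", resp["SnapshotId"])
```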


Blue-Green Deployment: Achieving Seamless and Reliable Software Releases

Blue-green deployment is a software deployment strategy for reducing risk and downtime when releasing new versions or updates of an application. It entails running two parallel instances of the same production environment, with the “blue” environment representing the current stable version and the “green” environment hosting the new one. With this configuration, switching between the two environments can be done without disrupting end users. The fundamental idea behind blue-green deployment is to route user traffic to the blue environment by default, protecting the production system's stability and dependability. Developers and QA teams can validate the new version while the green environment is being set up and thoroughly tested, before it is made available to end users. ... The advantages of blue-green deployment are numerous. By maintaining parallel environments, organizations can significantly reduce downtime during deployments.
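To make the traffic switch concrete, here is a minimal sketch, not from the article, using the Kubernetes Python client: the Service's label selector is patched from the blue Deployment to the green one. The service, namespace and labels are hypothetical; in practice the cutover is often handled by a load balancer, ingress, or CD tool instead.

```python
# Sketch of the blue-green cutover as a Kubernetes Service selector patch.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a cluster
v1 = client.CoreV1Api()

def switch_traffic(color: str) -> None:
    """Point the hypothetical 'myapp' Service at the blue or green pods."""
    patch = {"spec": {"selector": {"app": "myapp", "version": color}}}
    v1.patch_namespaced_service(name="myapp", namespace="prod", body=patch)

# Green has been deployed and validated; flip user traffic over in one step.
switch_traffic("green")
# If problems surface, rollback is the same one-line switch:
# switch_traffic("blue")
```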


Shaping the Future of Hybrid Quantum Algorithms for Drug Discovery

One of the main challenges of drug discovery is simulating the interaction between molecules to, for instance, predict the potency of a drug. Accurately simulating the behavior of even a single molecule is tricky, since the number of possible interactions with other molecules skyrockets as the overall number of molecules increases. Computer-aided drug discovery has been around for about 40 years. However, due to limited computational power, the first software packages had to simplify the physics and depended heavily on experimental validation, which, to this day, involves a lot of trial and error. As computational power increases and physics models become more and more complex, we’ll be able to run more accurate simulations that not only spare us a lot of experimental testing but also allow us to develop entirely new drugs. Because of these simplifications, a vast chunk of the chemical search space has so far remained untapped. Quantum computing is still very early, and quantum computers have yet to demonstrate a practical advantage over supercomputers.
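A back-of-the-envelope illustration, not from the article, of why exact simulation skyrockets: representing the exact quantum state of n two-level systems (say, spin orbitals) takes 2^n complex amplitudes, which outgrows any classical machine's memory around n ≈ 50.

```python
# How the memory needed for an exact quantum state grows with system size.
for n in (10, 20, 30, 40, 50):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2**30  # 16 bytes per complex128 amplitude
    print(f"n = {n:2d}: {amplitudes:>16,d} amplitudes (~{gib:,.1f} GiB)")
# By n = 50 the state alone needs ~16 million GiB, which is why classical
# packages simplify the physics and why quantum hardware is of interest.
```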


A technology lawyer suggests how artificial intelligence can benefit every Indian tangibly

As impressive as AI has been so far, we are, at the time of this writing, on the brink of yet another transformation that promises to be even more dramatic. Over the past year or so, remarkable improvements in the capabilities of large language models (LLMs) have hinted at a new form of emergent ‘intelligence’ that can be deployed across a range of applications whose full scale and scope will only become evident over time. So powerful is the potential of this new technology that some of the brightest minds on the planet have called for a pause in its development out of the fear that it will lead to a Skynet future and the genuine threat of unleashing malicious artificial general intelligence. LLMs are computer algorithms designed to generate coherent and intelligent responses to queries in a humanlike conversational manner. They are built on artificial neural networks that have typically been trained on massive data sets, which allows them to learn the structure of language. LLMs can learn without being explicitly programmed.
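To illustrate "learning without being explicitly programmed" in miniature, here is a toy bigram model, not from the article, that picks up word order purely by counting a sample text; LLMs do something conceptually similar with neural networks at vastly larger scale.

```python
# Toy language model: learn which word follows which, purely from data.
import random
from collections import defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# The only "programming" is counting successor words.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Sample a continuation one word at a time from observed successors."""
    words = [start]
    for _ in range(length):
        words.append(random.choice(following[words[-1]]))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```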


Team Topologies: A Game Changer For Your Data Governance Organization

Managing data is not only a technological task, but also an organizational one. It requires successful coordination and collaboration between different teams and stakeholders. Here, priorities, goals, and perspectives often differ, making it difficult to establish effective work processes and communication structures. Another key aspect is the clear definition of roles – such as the role of a data architect or the role of a master data manager – and their responsibilities in the context of the data organization. Without clear structures, misunderstandings and conflicts can arise, negatively impacting data management efficiency and business processes. Given these challenges, implementing effective data management and data governance practices sometimes seems daunting. However, it is a critical factor in the success of data-driven organizations, and strategies exist to overcome these challenges. One promising strategy is to apply innovative collaboration models and team structures.


Soft Skills Play Significant Role in Success of IT Professionals

A person with strong problem-solving skills typically demonstrates the ability to analyze complex issues systematically, break them down, and identify effective solutions, according to Haggarty. "They showcase critical thinking, resourcefulness, and a willingness to explore alternative approaches," she noted. "Effective problem-solvers are also skilled in evaluating potential consequences and making informed decisions." In addition, their capacity to collaborate with diverse teams also contributes to successful problem-solving in dynamic work environments. In the tech industry, networking facilitates idea exchange and exposure to diverse perspectives. Haggarty said networking is highly ranked due to its potential to foster collaboration, knowledge sharing, and professional growth. "Establishing strong professional relationships can lead to opportunities for collaboration, career advancement, and staying informed about industry trends," she said. "It can also aid with problem-solving by connecting individuals with complementary skills to address multifaceted challenges."



Quote for the day:

"If my mind can conceive it, my heart can believe it, I know I can achieve it." -- Jesse Jackson
