Daily Tech Digest - September 14, 2024

Three Critical Factors for a Successful Digital Transformation Strategy

Just as important as the front-end experience are the back-end operations that keep and build the customer relationship. Value-added digital services that deliver back-end operational excellence can improve the customer experience through better customer service, improved security and more. Emerging tech like artificial intelligence can give companies a substantially clearer view into their operations and customer base. Take data flow and management, for example. Many executives report they are swimming in information, yet around half admit they struggle to analyze it, according to research by PayNearMe. While data is important, the insights derived from that data are key to the conclusions executives must draw. Maintaining a digital record of customer information, transaction history, spend behaviors and other metrics and applying AI to analyze and inform decisions can help companies provide better service and protect their end users. They can streamline customer service, for instance, by immediately sourcing relevant information and delivering a resolution in near-real time, or by automating the analysis of spend behavior and location data to shut down potential fraudsters.
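
As a rough illustration of what automated spend-behavior analysis can look like, here is a minimal sketch, not any vendor's actual system: it flags transactions that deviate sharply from a customer's history using a simple z-score. The function name, field meanings, and three-sigma threshold are assumptions for illustration only.

```python
from statistics import mean, stdev

def flag_suspicious(history, new_amount, threshold=3.0):
    """Flag a transaction whose amount deviates sharply from history.

    history: past transaction amounts for one customer
    threshold: standard deviations treated as anomalous
    (both are illustrative assumptions, not a production fraud rule)
    """
    if len(history) < 2:
        return False  # not enough data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu
    return abs(new_amount - mu) / sigma > threshold

# A $950 charge against a history of small purchases gets flagged
print(flag_suspicious([12.50, 8.99, 23.10, 15.75, 9.40], 950.00))  # True
```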


AI reshaping the management of remote workforce

In a remote work setting, one of the biggest challenges for organizations remains streamlining operations. For a scattered team, AI emerges as a revolutionary tool for automating shift scheduling and rostering using historical pattern analytics. Historical data on staff availability, productivity, and work patterns enables organizations to optimise schedules and strike a perfect balance between operational needs and employee preferences. Subsequently, this reduces conflicts and enhances overall work efficiency. Beyond this, AI analysis of staff work durations and shifts enables organizations to predict staffing needs and optimise resource allocation. This enhances capacity modelling to ensure the right team member is available to handle tasks during peak times, preventing overstaffing or understaffing issues. ... With expanding use cases, AI-powered facial recognition technology has become a critical part of identity verification and promoting security in remote work settings. Organisations need to ensure security and confidentiality at all stages of their work. In tandem, AI-powered facial recognition ensures that only authorized personnel have access to the company’s sensitive systems and data.
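
To make "predicting staffing needs from historical patterns" concrete, here is a minimal sketch with invented data: it averages past demand by hour of day and converts it to a suggested headcount. A real workforce-management system would use far richer models; the per-agent capacity figure is an assumption.

```python
import math
from collections import defaultdict

# Hypothetical historical log: (hour_of_day, tickets_handled) pairs
history = [(9, 40), (9, 44), (10, 70), (10, 66), (11, 52), (11, 48)]

TICKETS_PER_AGENT = 10  # assumed handling capacity per agent per hour

def staffing_forecast(log):
    """Average past demand for each hour and convert it to a headcount."""
    by_hour = defaultdict(list)
    for hour, tickets in log:
        by_hour[hour].append(tickets)
    return {hour: math.ceil(sum(v) / len(v) / TICKETS_PER_AGENT)
            for hour, v in sorted(by_hour.items())}

print(staffing_forecast(history))  # {9: 5, 10: 7, 11: 5}
```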


The DPDP Act: Navigating digital compliance under India’s new regulatory landscape

Adapting to the DPDPA will require tailored approaches, as different sectors face unique challenges based on their data handling practices, customer bases, and geographical scope. However, some fundamental strategies can help businesses effectively navigate this new regulatory landscape. First, conducting a comprehensive data audit is essential. Businesses need to understand what data they collect, where it is stored, and who has access to it. Mapping out data flows allows organizations to identify risks and address them proactively, laying the groundwork for robust compliance. Appointing a Data Protection Officer (DPO) is another critical step. The DPO will be responsible for overseeing compliance efforts, serving as the primary point of contact for regulatory bodies, and handling data subject requests. While it is not yet established whether appointing one is mandatory, it is safe to say that this role is vital for embedding a culture of data privacy within the organisation. Technology can also play a significant role in ensuring compliance. Tools such as Unified Endpoint Management (UEM) solutions, encryption technologies, and data loss prevention (DLP) systems can help businesses monitor data flows, detect anomalies, and prevent unauthorized access.


10 Things To Avoid in Domain-Driven Design (DDD)

To prevent potential issues, it is your responsibility to maintain a domain model that is uncomplicated and accurately reflects the domain. This diligent approach keeps the focus on modeling the components of the domain that offer strategic importance and on streamlining or excluding less critical elements. Remember, Domain-Driven Design (DDD) is primarily concerned with strategic design, not with needlessly complicating the domain model with unnecessary intricacies. ... It's crucial to leverage Domain-Driven Design (DDD) to deeply analyze and concentrate on the domain's most vital and influential parts. Identify the aspects that deliver the highest value to the business and ensure that your modeling efforts are closely aligned with the business's overarching priorities and strategic objectives. Actively collaborating with key business stakeholders is essential to gain a comprehensive understanding of what holds the greatest value to them and subsequently prioritize these areas in your modeling endeavors. This approach will optimally reflect the business's critical needs and contribute to the successful realization of strategic goals.


How to Build a Data Governance Program in 90 Days

With a new data-friendly CIO at the helm, Hidalgo was able to assemble the right team for the job and, at the same time, create an environment of maximum engagement with data culture. She assembled discussion teams and even a data book club that read and reviewed the latest data governance literature. In turn, that team assembled its own data governance website as a platform not just for sharing ideas but also to spread the momentum. “We kept the juices flowing, kept the excitement,” Hidalgo recalled. “And then with our data governance office and steering committee, we engaged with all departments, we have people from HR, compliance, legal, product, everywhere – to make sure that everyone is represented.” ... After choosing a technology platform in May, Hidalgo began the most arduous part of the process: preparation for a “jumpstart” campaign that would kick off in July. Hidalgo and her team began to catalog existing data one subset of data at a time – 20 KPIs or so – and complete its business glossary terms. Most importantly, Hidalgo had all along been building bridges between Shaw’s IT team, data governance crew, and business leadership to the degree that when the jumpstart was completed – on time – the entire business saw the immense value-add of the data governance that had been built.


Varied Cognitive Training Boosts Learning and Memory

The researchers observed that varied practice, not repetition, primed older adults to learn a new working memory task. Their findings, which appear in the journal Intelligence, propose diverse cognitive training as a promising whetstone for maintaining mental sharpness as we age. “People often think that the best way to get better at something is to simply practice it over and over again, but robust skill learning is actually supported by variation in practice,” said lead investigator Elizabeth A. L. Stine-Morrow ... The researchers narrowed their focus to working memory, or the cognitive ability to hold one thing in mind while doing something else. “We chose working memory because it is a core ability needed to engage with reality and construct knowledge,” Stine-Morrow said. “It underpins language comprehension, reasoning, problem-solving and many sorts of everyday cognition.” Because working memory often declines with aging, Stine-Morrow and her colleagues recruited 90 Champaign-Urbana locals aged 60-87. At the beginning and end of the study, researchers assessed the participants’ working memory by measuring each person’s reading span: their capacity to remember information while reading something unrelated.


Why Cloud Migrations Fail

One stumbling block on the cloud journey is misunderstanding or confusion around the shared responsibility model. This framework delineates the security obligations of cloud service providers, or CSPs, and customers. The model necessitates a clear understanding of end-user obligations and highlights the need for collaboration and diligence. Broad assumptions about the level of security oversight provided by the CSP can lead to security/data breaches that the U.S. National Security Agency (NSA) notes “likely occur more frequently than reported.” It’s also worth noting that 82% of breaches in 2023 involved cloud data. The confusion is often magnified in cases of a cloud “lift-and-shift,” a method where business-as-usual operations, architectures and practices are simply pushed into the cloud without adaptation to their new environment. In these cases, organizations may be slow to implement proper procedures, monitoring and personnel to match the security limitations of their new cloud environment. While the level of embedded security can differ depending on the selected cloud model, the customer must often enact strict security and identity and access management (IAM) controls to secure their environment.


AI - peril or promise?

The interplay between AI data centers and resource usage necessitates innovative approaches to mitigate environmental impacts. Advances in cooling technology, such as liquid immersion cooling and the use of recycled water, offer potential solutions. Furthermore, utilizing recycled or non-potable water for cooling can alleviate the pressure on freshwater resources. Moreover, AI itself can be leveraged to enhance the efficiency of data centers. AI algorithms can optimize energy use by predicting cooling needs, managing workloads more efficiently, and reducing idle times for servers. Predictive maintenance powered by AI can also prevent equipment failures, thereby reducing the need for excessive cooling. This is good news as the sector continues to use AI to achieve greater efficiencies and cost savings and to drive improvements in services; the expected impact of AI on the operational side of data centres is very positive. Over 65 percent of survey respondents reported that their organizations are regularly using generative AI, nearly double the percentage from the 2023 survey, and around 90 percent of respondents expect their data centers to be more efficient as a direct result of AI applications.
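
As a toy illustration of "predicting cooling needs," the sketch below uses a simple moving average over recent server load to pre-provision cooling capacity. The load readings, window size, and load-to-kilowatt ratio are all invented; production systems use far more sophisticated forecasting.

```python
def forecast_cooling(load_history, window=3, kw_per_load_unit=0.4):
    """Predict next-interval cooling power (kW) from recent server load.

    A moving average stands in for the far richer models a real
    data-center AI would use; window and kw_per_load_unit are
    illustrative assumptions.
    """
    recent = load_history[-window:]
    predicted_load = sum(recent) / len(recent)
    return predicted_load * kw_per_load_unit

# Hypothetical server-load readings for the last few intervals
loads = [120, 135, 150, 160, 170]
print(f"Pre-provision {forecast_cooling(loads):.1f} kW of cooling")  # 64.0 kW
```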


HP Chief Architect Recalibrates Expectations Of Practical Quantum Computing’s Arrival From Generations To Within A Decade

Hewlett Packard Labs is now adopting a holistic co-design approach, partnering with other organizations developing various qubits and quantum software. The aim is to simulate quantum systems to solve real-world problems in solid-state physics, exotic condensed matter physics, quantum chemistry, and industrial applications. “What is it like to actually deliver the optimization we’ve been promised with quantum for quite some time, and achieve that on an industrial scale?” Bresniker posed. “That’s really what we’ve been devoting ourselves to—beginning to answer those questions of where and when quantum can make a real impact.” One of the initial challenges the team tackled was modeling benzyne, an exotic chemical derived from the benzene ring. “When we initially tackled this problem with our co-design partners, the solution required 100 million qubits for 5,000 years—that’s a lot of time and qubits,” Bresniker told Frontier Enterprise. Considering current quantum capabilities are in the tens or hundreds of qubits, this was an impractical solution. By employing error correction codes and simulation methodologies, the team significantly reduced the computational requirements.


New AI reporting regulations

At its core, the new proposal requires developers and cloud service providers to fulfill reporting requirements aimed at ensuring the safety and cybersecurity resilience of AI technologies. This necessitates the disclosure of detailed information about AI models and the platforms on which they operate. One of the proposal’s key components is cybersecurity. Enterprises must now demonstrate robust security protocols and engage in what’s known as “red-teaming”—simulated attacks designed to identify and address vulnerabilities. This practice is rooted in longstanding cybersecurity practices, but it does introduce new layers of complexity and cost for cloud users. Based on the negative impact of red-teaming on enterprises, I suspect it may be challenged in the courts. The regulation does increase focus on security testing and compliance. The objective is to ensure that AI systems can withstand cyberthreats and protect data. However, this is not cheap. Achieving this result requires investments in advanced security tools and expertise, typically stretching budgets and resources. My “back of the napkin” calculations put the added cost at about 10% of the system’s total cost.



Quote for the day:

"Your greatest area of leadership often comes out of your greatest area of pain and weakness." -- Wayde Goodall

Daily Tech Digest - February 20, 2023

How quantum computing threatens internet security

“Basically, the problem with our current security paradigm is that it relies on encrypted information and decryption keys that are sent over a network from sender to receiver. Regardless of the way the messages are encrypted, in theory, someone can intercept and use the keys to decrypt apparently secure messages. Quantum computers simply make this process faster,” Tanaka explains. “If we dispense with this key-sharing idea and instead find a way to use unpredictable random numbers to encrypt information, the system might be immune. [Muons] are capable of generating truly unpredictable numbers.” The proposed system is based on the fact that the speed of arrival of these subatomic particles is always random. This would be the key to encrypt and decrypt the message, if there is a synchronized sender and receiver. In this way, the sending of keys would be avoided, according to the Japanese team. However, muon detection devices are large, complex and power-hungry, limitations that Tanaka believes the technology could ultimately overcome.
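
To make the idea concrete: if sender and receiver observe the same muon arrivals with synchronized clocks, each can derive the same key locally, so no key ever crosses the network. The sketch below is a heavily simplified illustration; the arrival timestamps are invented, and the hash-based derivation merely stands in for whatever the real system would use.

```python
import hashlib

def key_from_arrival_times(timestamps_ns, length=16):
    """Derive a shared key from muon arrival times (nanoseconds).

    Both parties observe the same random arrivals with synchronized
    clocks, so both derive the same bytes without ever sending a key.
    Hashing is a simplification of the real derivation step.
    """
    material = ",".join(str(t) for t in timestamps_ns).encode()
    return hashlib.sha256(material).digest()[:length]

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """One-time-pad-style XOR; applying it twice restores the data."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

arrivals = [1_000_173, 1_004_921, 1_011_408, 1_019_535]  # hypothetical
key = key_from_arrival_times(arrivals)
ciphertext = xor_cipher(b"secure message", key)
print(xor_cipher(ciphertext, key))  # b'secure message'
```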


Considering Entrepreneurship After a Successful Corporate Career? Here Are 3 Things You Need to Know

Many of you may be concerned that a transition could alienate your audience and force you to wait before making a move. But this is a common misconception rooted in the idea that your personal brand reflects what you do professionally. At Brand of a Leader, we help our clients shift their thinking by showing them that their personal brand is who they are, not what they do. The goal of personal brand discovery is to understand your essence and package it in a way that appeals to others. Your vocation is only one of your key talking points, and when you pivot, you simply shift those points while maintaining the essence of your brand. So, when should you start building your personal brand? The answer is simple: the sooner, the better. Building a brand takes time — time to build an audience, create visibility and establish associations between your name and consistent perceptions in people's minds. Starting sooner means you'll start seeing results faster.


Establish secure routes and TLS termination with wildcard certificates

By default, the Red Hat OpenShift Container Platform uses the Ingress Operator to create an internal certificate authority (CA) and issue a wildcard certificate valid for applications under the .apps subdomain. The web console and the command-line interface (CLI) use this certificate. You can replace the default wildcard certificate with one issued by a public CA included in the CA bundle provided by the container userspace. This approach allows external clients to connect to applications running under the .apps subdomain securely. You can replace the default ingress certificate for all applications under the .apps subdomain. After replacing the certificate, all applications, including the web console and CLI, will be encrypted using the specified certificate. One clear benefit of using a wildcard certificate is that it minimizes the effort of managing and securing multiple subdomains. However, this convenience comes at the cost of sharing the same private key across all managed subdomains.
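
As a rough sketch of that replacement step, assuming the kubernetes Python client and the IngressController's documented spec.defaultCertificate field (the secret name and file paths here are placeholders): the new wildcard certificate is stored as a TLS secret in openshift-ingress, and the default IngressController is patched to reference it.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes cluster-admin credentials in kubeconfig

# 1. Store the publicly signed wildcard cert/key as a TLS secret.
secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="custom-wildcard-cert"),
    type="kubernetes.io/tls",
    string_data={
        "tls.crt": open("wildcard.crt").read(),  # placeholder paths
        "tls.key": open("wildcard.key").read(),
    },
)
client.CoreV1Api().create_namespaced_secret("openshift-ingress", secret)

# 2. Point the default IngressController at the new secret.
client.CustomObjectsApi().patch_namespaced_custom_object(
    group="operator.openshift.io", version="v1",
    namespace="openshift-ingress-operator",
    plural="ingresscontrollers", name="default",
    body={"spec": {"defaultCertificate": {"name": "custom-wildcard-cert"}}},
)
```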


Overcoming a cyber “gut punch”: An interview with Jamil Farshchi

Your biggest enemies in a breach are time and perfection. Everyone wants everything done in a split second. And having perfect information to construct perfect solutions and make perfect decisions is impossible. Time and perfection will ultimately crush you. By contrast, your two greatest allies are communication and optionality. Communication is being able to lay out the story of where things are, and to make sure everyone is rowing in the same direction. It’s being able to communicate the current status, and your plans, to regulators—and at the same time being able to reassure your customers and make sure they have confidence that you’re going to be able to navigate to the other side. Optionality is critical, because no one makes perfect decisions in this kind of firefight. Unless you’re comfortable making decisions that might not be right at any given point in time, you’re going to fail. [As a leader,] you need to frame up a program and the decisions you’re making in such a way that you’re comfortable rolling them back or tailoring them as you learn more, and as things progress.


7 reasons to avoid investing in cyber insurance

Two things organizations might want to consider right off the bat when contemplating an insurance policy are the cost to and benefit for the business, SecAlliance Director of Intelligence Mick Reynolds tells CSO. “When looking at cost, the recent spate of ransomware attacks globally has seen massive increases in premiums for firms wishing to include coverage of such events. Renewal quotes have, in some cases, increased from around £100,000 ($120,000) to over £1.5 million ($1.8 million). Such massive increases in premiums, for no perceived increase in coverage, are starting now to be challenged by board risk committees as to the overall value they provide, with some now deciding that accepting exposure to major cyber events such as ransomware is preferable to the cost of the associated policy.” As for benefits to the business, insurance is primarily taken out to cover losses incurred during a major cyber event, and 99% of the time these losses are quantifiable and relate predominantly to response and recovery costs, Reynolds says.


The importance of plugging insurance cyber response gaps

The insurance industry is a lucrative target as organisations hold large amounts of private and sensitive information about their policy holders who, rightfully so, have the expectation of their data being kept safe and secure. This makes it no surprise that the industry is a key target for cyber criminals due to the massive disruption it can cause and the potential high financial reward on offer. Research shows that 82 per cent of the largest insurance carriers were the focus of ransom attacks in 2022. It is expected that the insurance industry will only become a more favourable target, and these types of disruptions will become increasingly severe. The insurance industry is one that has embraced innovation and new forms of technology in its practices over recent years in order to offer their customers a seamless experience. In doing so, alongside the onset of remote working catalysed by the pandemic, they have increased their threat surface. ... These are just the tip of the iceberg, so when cyber criminals look to exploit data, the insurance industry is a primary target due its huge customer base.


Value Chain Analysis: Best Practices for Improvements

To stay competitive, organizations must ensure that they have picked the right partners for each of the functions in the value chain, and that appropriate value is captured by each participant. “In addition to ensuring each participant’s value and usefulness in the chain, value chain analysis enables organizations to periodically verify that functions are still necessary, and that value is being delivered efficiently without undue waste such as administrative burden, communications costs or transit or other ancillary functions,” he says. Business leaders and IT leaders like the chief information officer and chief data officer must prove that they are benefiting the bottom line. While it is time consuming, value chain analysis is a key method to examine company value -- an essential practice during times of high stakes and economic uncertainty. Jon Aniano, senior vice president, Zendesk, adds that running a full VCA requires analyzing and tracking a massive amount of data across your entire company.


Cybersecurity takes a leap forward with AI tools and techniques

“An effective AI agent for cybersecurity needs to sense, perceive, act and adapt, based on the information it can gather and on the results of decisions that it enacts,” said Samrat Chatterjee, a data scientist who presented the team’s work. “Deep reinforcement learning holds great potential in this space, where the number of system states and action choices can be large.” DRL, which combines reinforcement learning and deep learning, is especially adept in situations where a series of decisions in a complex environment need to be made. Good decisions leading to desirable results are reinforced with a positive reward (expressed as a numeric value); bad choices leading to undesirable outcomes are discouraged via a negative cost. It’s similar to how people learn many tasks. A child who does their chores might receive positive reinforcement with a desired playdate; a child who doesn’t do their work gets negative reinforcement, like having a digital device taken away.
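
For a stripped-down flavor of that reward/cost mechanic, here is a toy tabular sketch rather than true deep reinforcement learning: a defender learns to patch when activity looks suspicious because that choice earns a positive reward, while missing an attack incurs a negative cost. States, actions, and reward values are all invented, and the one-step update is a contextual-bandit simplification.

```python
import random

states = ["normal", "suspicious"]
actions = ["monitor", "patch"]
Q = {(s, a): 0.0 for s in states for a in actions}
ALPHA, EPSILON = 0.1, 0.2  # learning rate, exploration rate

def reward(state, action):
    """Positive reward for good defensive choices, negative cost otherwise."""
    if state == "suspicious" and action == "patch":
        return +1.0   # blocked an attack
    if state == "suspicious" and action == "monitor":
        return -1.0   # missed an attack
    return -0.1 if action == "patch" else 0.0  # patching idle systems wastes effort

for _ in range(5000):
    s = random.choice(states)
    if random.random() < EPSILON:
        a = random.choice(actions)            # explore
    else:
        a = max(actions, key=lambda act: Q[(s, act)])  # exploit
    Q[(s, a)] += ALPHA * (reward(s, a) - Q[(s, a)])    # one-step update

print(max(actions, key=lambda act: Q[("suspicious", act)]))  # "patch"
```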


9 ways ChatGPT will help CIOs

“ChatGPT is very powerful out of the box, so it doesn’t require extensive training or teaching to get up to speed and handle specific business processes. A valuable initial business application for ChatGPT should be directed towards routine tasks, such as filling out a contract. It can effectively review the document and answer the necessary fields using the data and context provided by the organization. With that said, ChatGPT has the potential to shoulder administrative burdens for CIOs quickly, but it’s important to regularly measure the accuracy of its work, especially if an organization plans to use it regularly. The best way for CIOs to get started with ChatGPT is to take the time to grasp how it would work within the context of their organization before rushing to widespread adoption. At these early stages of the technology, it’s better to let it complement existing workflows under close supervision instead of restructuring around it as an end-to-end solution. 
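
A minimal sketch of the contract-review idea using the OpenAI Python SDK; the model choice, prompt wording, and file name are placeholder assumptions, not the article's prescription, and any output would still need the accuracy checks the excerpt recommends.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

contract_text = open("contract_draft.txt").read()  # placeholder file

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {"role": "system",
         "content": "Extract the party names, effective date, and payment "
                    "terms from the contract. Answer 'unknown' if absent."},
        {"role": "user", "content": contract_text},
    ],
)
print(response.choices[0].message.content)
```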


Art Of Knowledge Crunching In Domain Driven Design

Miscommunication during knowledge crunching sessions can have several causes, such as cognitive bias, a type of error in reasoning, decision-making, and perception that occurs because of the way our brains perceive and process information. This type of bias occurs when an individual’s cognitive processes lead them to form inaccurate conclusions or make irrational decisions. For example, when betting at a roulette table, if previous outcomes have landed on red, we might mistakenly assume that the next outcome will be black; however, these events are independent of each other (i.e., their outcomes do not affect each other’s probabilities). There is also apophenia, the tendency to perceive meaningful connections between unrelated things, as in conspiracy theories, or the moment we think we get it when actually we do not. A good example is an image sent from Mars that includes a shape on a rock you might take for the face of an alien, when it is just a randomly shaped rock.
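
The roulette point is easy to verify with a quick simulation: the chance of red on the spin after a long run of reds is the same 18/37 as on any other spin, which is exactly what the biased intuition gets wrong.

```python
import random

POCKETS = ["red"] * 18 + ["black"] * 18 + ["green"]  # European wheel

def p_red_after_streak(streak=5, trials=50_000):
    """Estimate P(red) on the spin following `streak` consecutive reds."""
    run = hits = seen = 0
    while seen < trials:
        spin = random.choice(POCKETS)
        if run >= streak:          # we just saw `streak` reds in a row
            seen += 1
            hits += spin == "red"
        run = run + 1 if spin == "red" else 0
    return hits / seen

print(p_red_after_streak())  # ~0.486 (= 18/37), same as any other spin
```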



Quote for the day:

"Effective team leaders adjust their style to provide what the group can't provide for itself." -- Kenneth Blanchard

Daily Tech Digest - September 02, 2021

Cyber Security In Cars

ISO/SAE 21434, Road vehicles – Cybersecurity engineering, addresses the cybersecurity perspective in engineering of electrical and electronic (E/E) systems within road vehicles. It will help manufacturers keep abreast of changing technologies and cyber-attack methods, and defines the vocabulary, objectives, requirements and guidelines related to cybersecurity engineering for a common understanding throughout the supply chain. The standard, developed in collaboration with SAE International, a global association of engineers and a key ISO partner, draws on the recommendations detailed in SAE J3061, Cybersecurity guidebook for cyber-physical vehicle systems, offering more comprehensive guidance and the input of experts all around the world. Dr Gido Scharfenberger-Fabian, Convenor of the group of ISO experts that developed the standard, said it will enable organizations to define cybersecurity policies and processes, manage cybersecurity risk and foster a cybersecurity culture. “ISO/SAE 21434 will help consider cybersecurity issues at every stage of the development process and in the field, increasing the vehicle’s own cybersecurity defences and mitigating the risk of potential vulnerabilities for every component,” he said.


Ultimate Guide to Becoming a DevOps Engineer

The job title DevOps Engineer is thrown around a lot and it means different things to different people. Some people claim that the title DevOps Engineer shouldn’t exist, because DevOps is ‘a culture’ or ‘a way of working’—not a role. The same people would argue that creating an additional silo defeats the purpose of overlapping responsibilities and having different teams working together. These arguments are not wrong. In fact, some companies that understand and do DevOps engineering very well don’t even have a role with that name (like Google!). The truth is that whenever you see DevOps Engineer jobs advertised, the ad might actually be for an infrastructure engineer, a site reliability engineer (SRE), a CI/CD engineer, a sysadmin, etc. So the definition for DevOps engineer is rather broad. One thing that’s certain though is to be a DevOps engineer, you must have a solid understanding of the DevOps culture and practices and you should be able to bridge any communication gaps between teams in order to achieve software delivery velocity.


WhatsApp fined a record 225 mln euro by Ireland over privacy

A WhatsApp spokesperson said in a statement the issues in question related to policies in place in 2018 and the company had provided comprehensive information. "We disagree with the decision today regarding the transparency we provided to people in 2018 and the penalties are entirely disproportionate," the spokesperson said. EU privacy watchdog the European Data Protection Board said it had given several pointers to the Irish agency in July to address criticism from its peers for taking too long to decide in cases involving tech giants and for not fining them enough for any breaches. It said a WhatsApp fine should take into account Facebook's turnover and that the company should be given three months instead of six months to comply. Europe's landmark privacy rules, known as GDPR, are finally showing some teeth even if the lead regulator for some tech giants appears otherwise, said Ulrich Kelber, Germany's federal commissioner for data protection and freedom of information. "What is important now is that the many other open cases on WhatsApp in Ireland are finally decided on so that we can take faster and longer strides towards the uniform enforcement of data protection law in Europe," he told Reuters.


DevOps, Low-Code and RPA: Pros and Cons

RPA programs enable companies to automate repetitive tasks by creating software scripts using a recorder. For those of us who remember using the macro recorder in Microsoft Excel, it’s a similar concept. Once the script is created, users can then use a visual editor to modify, reorder and edit its steps. Speaking to the growing popularity of these solutions was the UiPath IPO on April 21, 2021, which ended up being one of the largest software IPOs in history. The use cases for RPA programs are unlimited—any repetitive task done via a UI is a candidate. RPA is an area where we’ve seen an intersection of business-user designed apps (UiPath and Blue Prism) with more traditional DevOps tools specifically in the test automation space (Tricentis, Worksoft, and Eggplant) and new conversational-based solutions like Krista. In the case of test automation, a lightweight recorder is given to a business user who can then record a business process. The recording is then fed to the automation team, which creates a hardened test case that in turn is fed into a CI/CD system.


IBM quantum computing: From healthcare to automotive to energy, real use cases are in play

Quantum computers are better at that than classical computers, Utz said. Anthem is running different models on IBM's quantum cloud. Right now, company officials are building a roadmap around how Anthem wants to deliver its platform using quantum technology, so "I can't say quantum is ready for primetime yet," Utz said. "The plan is to get there over the next year or so and have something working in production." A good place to start with anomaly detection is in finding fraud, he said. "Classical computers will tap out at some point and can't get to the same place as quantum computers." Other use cases are around longitudinal population health modeling, meaning that as Anthem looks at providing more of a digital platform for health, one of the challenges is that there is "almost an infinite number of relationships," he said. This includes different health conditions, providers patients see, outcomes and figuring out where there are outliers, he said. "There's only so much a classical system can do there, so we're looking for more opportunities to improve healthcare for our members and the population at large," and the ability to proactively predict risk, Utz said. 


How to Implement Domain-Driven Design (DDD) in Golang

Domain-Driven Design is a way of structuring and modeling software after the domain it belongs to. What this means is that a domain first has to be considered for the software that is written. The domain is the topic or problem that the software intends to work on. The software should be written to reflect the domain. DDD advocates that the engineering team meet with the Subject Matter Experts (SMEs), the experts inside the domain. The reason for this is that the SMEs hold the knowledge about the domain, and that knowledge should be reflected in the software. It makes sense when you think about it: if I were to build a stock trading platform, do I as an engineer know the domain well enough to build a good stock trading platform? The platform would probably be a lot better off if I had a few sessions with Warren Buffett about the domain. The architecture of the code should also reflect the domain.

 

China’s Personal Information Protection Law and Its Global Impact

The law’s restrictions on cross-border data transfers may not affect retailers that operate domestically, and hence have no need to transfer information abroad. However, the story is vastly different for two types of companies: those in possession of large amounts of personal information and those in possession of information on critical infrastructure. Moreover, PIPL declares that the authority of domestic regulators supersedes that of international treaties. PIPL will help foreign companies operating in China without cross-border data transfers to develop privacy policies in compliance with the law. Before PIPL, the lack of a domestic PI protection law led to the broad adoption of the EU’s GDPR as a privacy policy among foreign companies. However, the GDPR’s decision-making is based on agreements among EU member states, which does not apply in the case of China. Since PIPL will come into effect in November 2021, foreign firms in China will need to revise their privacy policies to fit the requirements of the new law.


10 Characteristics of an AI-Powered Enterprise

Digital transformation makes the inclusion of AI as part of the business strategy even more important than it would be otherwise because digital organizations are software companies. Since commercial applications and tools are increasingly taking advantage of AI, the logical development by extension is AI embedded in enterprise-built applications. After all, businesses are moving more data and compute to the cloud and their new applications are being designed as cloud-first applications. Of course, AI and machine learning tooling is also available in the cloud, so developers have what they need to build “intelligent” applications. AI and machine learning don't just work, however. They require testing and monitoring. “Losing trust in AI-infused applications is a high risk for AI-based innovation,” said Diego Lo Giudice, VP and principal analyst at Forrester, in a blog post. “Forrester Analytics data shows that 73% of enterprises claim to be adopting AI for building new solutions in 2021, up from 68% in 2020, and testing those AI-infused applications becomes even more critical.” Trust and safety are things that need to be proven through testing.


Why Rust is the best language for IoT development

Internet of Things (IoT) technology is rapidly terraforming the landscape of modern society right in front of our very eyes, and propelling us all into the future. It does this by providing solutions to everything from tracking your daily personal fitness goals with an Apple watch, to completely revolutionising the entire transport sector. These devices connect to each other and form the great network required for something like a digital twin; they are constantly collating data in real time from the surrounding environment, which means that the system is always using entirely current information. As amazing and powerful as this technology is, it is slightly held back by the fact that, by their very nature, IoT devices have far less processing power than your average piece of equipment. This requires much more efficient code to be written to take full advantage of their raw potential without affecting the devices’ performance. This is where Rust comes into the picture as one of the very few languages that can provide a faster runtime for IoT technology.


Are Tesla’s Dojo supercomputer claims valid?

The D1, according to Tesla, features 362 teraFLOPS of processing power. This means it can perform 362 trillion floating-point operations per second (FLOPS), Tesla says. Now imagine harnessing the processing power of 25 D1 chips into a training tile, and then linking together 120 training tiles through multiple servers. That’s what Tesla is doing with the Dojo supercomputer for its autonomous cars. And with each training tile containing 9 PFLOPS of computing power, Dojo has (by my possibly inaccurate calculations) 1.08 exaFLOPS of power under its hood (Tesla calls it 1.1 EFLOPS). That kind of horsepower would make Dojo more than twice as fast as the currently acknowledged fastest supercomputer in the world, Fugaku. Built by Fujitsu, this supercomputer reaches speeds of 442 PFLOPS. Supercomputers already are being used to accelerate medical research and drug development because they are capable of quickly processing massive amounts of data. Indeed, researchers have relied on supercomputers to power COVID-19 research since the pandemic began in early 2020.
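
The article's arithmetic can be checked directly with the figures as stated:

```python
D1_TFLOPS = 362        # per D1 chip, as stated by Tesla
CHIPS_PER_TILE = 25
TILES = 120

tile_pflops = D1_TFLOPS * CHIPS_PER_TILE / 1_000   # 9.05 PFLOPS per tile
dojo_eflops = tile_pflops * TILES / 1_000          # ~1.086 EFLOPS total
fugaku_eflops = 0.442                              # Fugaku, for comparison

print(f"{tile_pflops:.2f} PFLOPS per tile, {dojo_eflops:.3f} EFLOPS total")
print(f"{dojo_eflops / fugaku_eflops:.1f}x Fugaku")  # ~2.5x, on paper
```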



Quote for the day:

"Great leaders go forward without stopping, remain firm without tiring and remain enthusiastic while growing." -- Reed Markham