
Daily Tech Digest - September 01, 2022

Cloud Applications Are The Major Catalysts For Cyber Attacks

Cybersecurity threats have risen substantially in recent years because criminals have built lucrative businesses from stealing data and nation-states have come to see cybercrime as an opportunity to acquire information, influence, and advantage over their rivals. This has paved the way for potentially catastrophic attacks such as the WannaCrypt ransomware campaign that dominated recent headlines. This evolving threat landscape has begun to change the way customers view the cloud. “It was only a few years ago when most of my customer conversations started with, ‘I can’t go to the cloud because of security. It’s not possible,’” said Julia White, Microsoft’s corporate vice president for Azure and security. “And now I have people, more often than not, saying, ‘I need to go to the cloud because of security.’” It’s not an exaggeration to say that cloud computing is completely changing our society. It’s upending major industries such as the retail sector, enabling the kind of mathematical computation that is powering an artificial intelligence revolution, and even having a profound impact on how we communicate with friends, family, and colleagues.


Intel AI chief Wei Li: Someone has to bring today's AI supercomputing to the masses

As is often the case in technology, everything old is new again. Suddenly, says Li, everything in deep learning is coming back to the compiler innovations of decades past. "Compilers had become irrelevant" in recent years, he said, an area of computer science viewed as largely settled. "But because of deep learning, the compiler is coming back," he said. "We are in the middle of that transition." In his PhD dissertation at Cornell, Li developed a computing framework for processing code on very large systems with what is called "non-uniform memory access," or NUMA. His program refashioned code loops to extract as much parallel processing as possible. But it also did something else particularly important: it decided which code should run depending on which memories the code needed to access at any given time. Today, says Li, deep learning is approaching the point where those same problems dominate. Deep learning's potential is mostly gated not by how many matrix multiplications can be computed but by how efficiently the program can use memory and bandwidth.
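Li's dissertation idea, restructuring loops so data is reused while it still sits in nearby memory, can be illustrated with a classic compiler transformation. The following is a minimal Python sketch (real compilers apply this to native code, not Python) of loop tiling, which reorders a matrix multiply so each block of the operands is reused before it leaves fast memory:

```python
def matmul_naive(A, B):
    """Textbook triple loop: strides across all of B for every row of A."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

def matmul_tiled(A, B, tile=2):
    """Same arithmetic, reordered into tile x tile blocks so each block
    of A, B and C is reused while it is still in fast memory."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for jj in range(0, n, tile):
            for kk in range(0, n, tile):
                for i in range(ii, min(ii + tile, n)):
                    for j in range(jj, min(jj + tile, n)):
                        for k in range(kk, min(kk + tile, n)):
                            C[i][j] += A[i][k] * B[k][j]
    return C

A = [[float(i + j) for j in range(4)] for i in range(4)]
B = [[float(i * j + 1) for j in range(4)] for i in range(4)]
```

The tiled version computes exactly the same result; the payoff is purely in memory-access locality, which is the kind of scheduling decision Li describes compilers now making for deep learning workloads.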


Event Streaming and Event Sourcing: The Key Differences

Event streaming employs the pub-sub approach to enable more accessible communication between systems. In the pub-sub architectural pattern, consumers subscribe to a topic or event, and producers post to these topics for consumers’ consumption. The pub-sub design decouples the publisher and subscriber systems, making it easier to scale each system individually. The publisher and subscriber systems communicate through a message broker like Apache Pulsar. When a state changes or an event occurs, the producer sends the data (data sources include web apps, social media and IoT devices) to the broker, after which the broker relays the event to the subscriber, who then consumes it. Event streaming involves the continuous flow of data from sources like applications, databases, sensors and IoT devices. Event streams employ stream processing, in which data undergoes processing and analysis as it is generated. This quick processing translates to faster results, which is valuable for businesses with a limited time window for taking action, as with any real-time application.
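As a rough sketch of the decoupling described above, the following minimal in-memory broker (a toy stand-in for a real broker like Apache Pulsar; the topic and event names are made up) lets producers and consumers communicate only through topics:

```python
from collections import defaultdict

class Broker:
    """Minimal in-memory stand-in for a message broker like Apache Pulsar."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        """A consumer registers a handler for a topic."""
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Producers never address consumers directly; the broker relays
        # the event to every handler subscribed to the topic.
        for handler in self._subscribers[topic]:
            handler(event)

broker = Broker()
received = []
broker.subscribe("sensors.temperature", received.append)
broker.publish("sensors.temperature", {"device": "thermostat-7", "celsius": 21.4})
```

Because the producer only ever calls publish on a topic, the subscriber side can be scaled or replaced without touching producer code, which is exactly the decoupling the pub-sub pattern buys.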


Big cloud rivals hit back over Microsoft licensing changes

In a nutshell, the changes that come into effect from October allow customers with Software Assurance or subscription licenses to use those existing licenses "to install software on any outsourcers' infrastructure" of their choice. But as The Register noted at the time, this specifically excludes "Listed Providers", a group that just happens to include Microsoft's biggest cloud rivals – AWS, Google and Alibaba – as well as Microsoft's own Azure cloud, in a bid to steer customers to Microsoft's partner network. ... These criticisms are not entirely new, and some in the cloud sector made similar points following Microsoft's disclosure of some of the licensing changes it intended to make back in May. One cloud operator who requested anonymity told The Register in June that Redmond's proposed changes fail to "move the needle" and ignore the company's "other problematic practices." AWS executive Matt Garman posted on LinkedIn in July that Microsoft's proposed changes did not represent fair licensing practice and were not what customers wanted.


Machine learning at the edge: The AI chip company challenging Nvidia and Qualcomm

Built on 16nm technology, the MLSoC’s processing system consists of computer vision processors for image pre- and post-processing, coupled with dedicated ML acceleration and high-performance application processors. Surrounding the real-time intelligent video processing are memory interfaces, communication interfaces, and system management, all connected via a network-on-chip (NoC). The MLSoC features low operating power and high ML processing capacity, making it ideal as a standalone edge-based system controller, or as an ML-offload accelerator for processors, ASICs and other devices. The software-first approach includes carefully defined intermediate representations (including the TVM Relay IR), along with novel compiler-optimization techniques. ... Many ML startups are focused on building only pure ML accelerators rather than an SoC with a computer-vision processor, application processors, CODECs, and external memory interfaces, the components that enable the MLSoC to be used as a stand-alone solution without connecting to a host processor. Other solutions usually lack network flexibility, performance per watt, and push-button efficiency – all of which are required to make ML effortless for the embedded edge.


Why CIOs Need to Be Even More Dominant in the C-Suite Right Now

“Now more than ever, we’re seeing a pressing demand for CIOs to deliver digital transformation that enables business growth to energize the top line or optimize operations to eliminate cost and help the bottom line,” says Savio Lobo, CIO of Ensono. This requires the CIO to have a deep understanding of the business and to surface decisions that may influence these objectives. Large-scale digital solutions and capabilities, however, often cannot be implemented simultaneously, especially when they require significant change in how customers and staff engage with people and processes. This means ruthless prioritization decisions may need to be made about what is moving forward at any given time and, equally importantly, what is not. “While executing a large initiative, there will also be people, process and technology choices to be made and these need to be made in a timely manner,” Lobo adds. This may look different for every organization but should include collaboration on discovery and implementation and an open feedback loop for how systems and processes are working or not working in each stakeholder’s favor.


Ensuring security of data systems in the wake of rogue AI

A ‘Trusted Computing’ model, like the one developed by the Trusted Computing Group (TCG), can be applied to all four of these AI elements in order to secure an AI against going rogue. Considering the data set element of an AI, a Trusted Platform Module (TPM) can be used to sign and verify that data has come from a trusted source. A hardware Root of Trust, such as the Device Identifier Composition Engine (DICE), can ensure that sensors and other connected devices maintain high levels of integrity and continue to provide accurate data. Boot layers within a system each receive a DICE secret, which combines the secret of the previous layer with the measurement of the current one. This ensures that when there is a successful exploit, the exposed layer’s measurements and secrets will differ, containing the breach and preventing data disclosure. DICE also automatically re-keys the device if a flaw is unearthed within the device firmware. The strong attestation offered by the hardware makes it a great tool for discovering vulnerabilities in any required updates.
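The layered secret derivation described above can be sketched as a hash chain. This is an illustrative Python model of the idea, not TCG's actual DICE specification: each layer's secret is derived by keying an HMAC with the previous layer's secret and feeding in a measurement of the current layer, so tampering with any layer changes every secret derived after it.

```python
import hashlib
import hmac

def dice_chain(device_secret, layer_measurements):
    """Illustrative model of DICE layered secrets: each boot layer's
    secret combines the previous layer's secret with a measurement
    (hash) of the current layer's code."""
    secrets = []
    current = device_secret
    for measurement in layer_measurements:
        current = hmac.new(current, measurement, hashlib.sha256).digest()
        secrets.append(current)
    return secrets

# A tampered layer changes its own secret and every secret after it,
# while layers booted before the exploit keep their original secrets.
good = dice_chain(b"unique-device-secret", [b"bootloader-v1", b"os-v1", b"app-v1"])
evil = dice_chain(b"unique-device-secret", [b"bootloader-v1", b"os-TAMPERED", b"app-v1"])
```

In this model the secrets of layers booted before the tampered layer are unchanged, while every later secret differs, which is what lets attestation pinpoint that something at or below a given layer was modified.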


The Implication of Feedback Loops for Value Streams

The practical implication for software engineering management is to first address the feedback loops that generate the most bugs and issues, in order to get capacity back. For example, if you have a fragile architecture or code of low maintainability that requires a lot of rework after any new change, refactoring is clearly necessary to regain engineering productivity; otherwise, engineering team capacity will remain low. The last observation is that lead time depends on the simulation duration: the longer you run the value stream, the more lead-time variants you will get. Such behavior is the direct implication of the value stream structure, with its redo feedback loop and the probability distribution between the output queue and the redo queue. If you are an engineering manager who inherited legacy code with significant accumulated debt, it might be reasonable to consider incremental solution rewriting. Otherwise, the speed of delivery will remain very slow permanently, not only for the duration of the modernization. The art is in simplicity: greater complexity yields more variation, which increases the probability of results falling outside acceptable parameters.
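The effect of a redo feedback loop on lead time can be sketched with a tiny Monte Carlo simulation (illustrative Python, with made-up parameters): each work item re-enters the loop with some probability, so lead times follow a geometric-style distribution whose tail grows with the redo probability.

```python
import random

def simulate_lead_times(n_items=10_000, redo_probability=0.3, seed=42):
    """Each item needs at least one pass through the value stream; after
    every pass it is sent back through the redo loop with redo_probability,
    so lead time is the total number of passes."""
    rng = random.Random(seed)
    lead_times = []
    for _ in range(n_items):
        passes = 1
        while rng.random() < redo_probability:
            passes += 1  # the item re-enters the redo queue
        lead_times.append(passes)
    return lead_times

lead_times = simulate_lead_times()
mean = sum(lead_times) / len(lead_times)
spread = max(lead_times) - min(lead_times)
```

With a 30% redo probability the mean lead time is roughly 1/(1 - 0.3) ≈ 1.43 passes, and raising the probability stretches the tail, which is exactly the lead-time variance attributed above to the redo loop: the longer the simulation runs, the more of that tail you observe.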


Beat these common edge computing challenges

Realizing the benefits of edge computing depends on a thoughtful strategy and careful evaluation of your use cases, in part to ensure that the upside will dwarf the natural complexity of edge environments. “CIOs shouldn’t adopt or force edge computing just because it’s the trendy thing – there are real problems that it’s intended to solve, and not all scenarios have those problems,” says Jeremy Linden, senior director of product management at Asimily. Part of the intrinsic challenge here is that one of edge computing’s biggest problem-solution fits – latency – has sweeping appeal. Not many IT leaders are pining for slower applications. But that doesn’t mean it’s a good idea (or even feasible) to move everything out of your datacenter or cloud to the edge. “So for example, an autonomous car may have some of the workload in the cloud, but it inherently needs to react to events very quickly (to avoid danger) and do so in situations where internet connectivity may not be available,” Linden says. “This is a scenario where edge computing makes sense.” In Linden’s own work – Asimily does IoT security for healthcare and medical devices – optimizing the cost-benefit evaluation requires a granular look at workloads.


Tenable CEO on What's New in Cyber Exposure Management

Tenable wants to provide customers with more context around what threat actors are exploiting in the wild to both refine and leverage the analytics capabilities the company has honed, Yoran says. Tenable must have context around what's mission-critical in a customer's organization to help clients truly understand their risk and exposure rather than just add to their cyber noise, he adds. Tenable has spent more on vulnerability management-focused R&D over the past half-decade than its two closest competitors combined, which has allowed the firm to deliver differentiated capabilities, Yoran says. Unlike competitors who have expanded their offerings to include everything from logging and SIEM to EDR and managed security services, Yoran says Tenable has remained laser-focused on risk. "The three primary vulnerability management vendors have three very different strategies and they've been on divergent paths for a long time," Yoran says. "For us, the key to success has been and will continue to be that focus on helping people assess and understand risk."



Quote for the day:

"Get your facts first, then you can distort them as you please." -- Mark Twain

Daily Tech Digest - February 06, 2022

Technical Debt and Modular Software Architecture

An organization incurs technical debt whenever it cedes its rights and perquisites as a customer to a cloud service provider. To get a feel for how this works in practice, consider the case of a hypothetical SaaS cloud subscriber. The subscriber incurs technical debt when it customizes the software or redesigns its core IT and business processes to take advantage of features or functions that are specific to the cloud provider’s platform (for example, Salesforce’s, Marketo’s, Oracle’s, etc.). This is fine for people who work in sales and marketing, as well as for analysts who focus on sales and marketing. But what about everybody else? Can the organization make its SaaS data available to high-level decision-makers and to the other interested consumers dispersed across its core business function areas? Can it contextualize this SaaS data with data generated across its core function areas? Is the organization taking steps to preserve historical SaaS data? In short: What is the opportunity cost of the SaaS model and its convenience? What steps must the organization take to offset this opportunity cost?


Data Breaches Affected Nearly 6 Billion Accounts in 2021

Breaches grew rapidly in 2021, noted Lucas Budman, founder and CEO of TruU, a multifactor authentication company in Palo Alto, Calif. “We exceeded the number of breach events in 2020 by the third quarter of 2021,” he told TechNewsWorld. A number of factors have been contributing to that increase, he added. “The ever-increasing sophistication of threat actors, a greater number of connected IoT devices, and the protracted shortage of skilled security talent all play a role in increased breach activity,” he said. Budman also maintained that Covid-19 has contributed to growing data breach numbers. “Data shows that the surge in remote and hybrid work and other factors resulting from the Covid-19 pandemic have fueled the rise of cybercrime by 600 percent or more,” he said. ... “Since an exceedingly large percentage of attacks focus on the end-user, this move to remote has proven very fruitful for attackers,” he told TechNewsWorld. “Similarly,” he continued, “the pandemic has dramatically changed the way goods and services are manufactured, dispatched and consumed. ...”


Council Post: How to develop a comprehensive AI governance & ethics function

Biases, including cognitive bias, incomplete data, flaws in the algorithm, etc., slow down the growth of AI in an organisation. Research and development play an important role in addressing these issues. Who understands this better than ethicists, social scientists, and domain experts? Therefore, businesses should include such experts in their AI projects across applications. Data architects also play a key role in governing AI products. Companies should have a complete pipeline of data or metadata for AI modelling. Remember, AI’s success depends on a well-sorted data architecture that is free of errors and noise. To do so, data standardisation, data governance, and business analytics are a must. HR plays a key role in shaping the AI governance function. For instance, they should find candidates who “fit” into the organisation’s existing AI framework and create training material for the existing workforce to help them understand how to create ethical AI applications. Ensuring AI products don’t cross any legal boundaries is critical for smooth deployment. AI solutions must meet the stipulated compliance guidelines of the organisation and of the industry in which it operates.


Importance of Binding Business and System Architecture

An enterprise architecture is a carefully designed structure for the entrepreneurial and economic activity of a business. One can easily assert that these entrepreneurial or economic activities include people, processes, and systems working in harmony to yield important business outcomes. These structures include organizational design, operational processes producing value, and the systems used by people during the execution of their mission. Enterprise architects use the business-prescribed operational end-state (results of value) to guide the enterprise, like a blueprint, toward accomplishing its mission—frequently, the end-states include vision, goals, objectives, and capabilities. Can a business exchange goods and services without technology and survive? Of course not. ... The enterprise architecture is neither the business architecture (operational viewpoint) nor the system architecture (technical viewpoint)—rather, the enterprise architecture is both architectures created in an integrated form, using a standardized method of design, and usable and consumable by both operational and technology people.


Cloud Native Winners and Losers

Enterprises that don’t plan ahead to move an application off a specific cloud but are forced to do so at some future point will also become losers. There is a lot of cost and risk involved in modifying applications to remove specific cloud native services and replace them with other cloud native services or open services. Clearly, this is the dreaded “vendor lock-in.” Most applications that move to cloud platforms won’t ever move off that platform during the life of the application, mostly due to the costs and risks involved. Another drawback is that you’ll need cloud-specific skills to take full advantage of cloud native features. This talent may not be available in-house or in the general labor pool, and/or it could drive staffing costs over budget. The pandemic drove a massive rush to public cloud providers, which meant the demand for cloud migration skills exploded as well, driving up salaries and consulting fees. Moreover, the scarcity of qualified skills increases the risk that you won’t find the skills needed for cloud native systems builds, and/or that the required level of talent will be unavailable to create optimized and efficient systems.


Leveraging small data for insights in a privacy-concerned world

While big data focuses on the huge volumes of information that individuals and consumers produce for businesses to look at and AI programs to sift through, small data is made up of far more accessible bite-sized chunks of information that humans can interpret to gain actionable insights. While big data can be a hindrance to small businesses due to its unstructured nature, the masses of storage space it requires, and oftentimes the necessity of being held in SQL servers, small data holds plenty of appeal in that it can arrive ready to sort with no need for merging tables. It can also be stored on a local PC or database for ease of access. However, as it is generally stored within a company, it’s essential that businesses apply appropriate levels of cybersecurity to protect the privacy of their customers and to keep confidential data safe. Maxim Manturov, head of investment research at Freedom Finance Europe, has identified Palo Alto Networks as a leading firm for businesses looking to protect their small data centrally. “Its security ecosystem includes the Prisma cloud security platform and the Cortex AI-based threat detection platform,” Manturov notes.


EvilModel: Malware that Hides Undetected Inside Deep Learning Models

While this scenario is alarming enough, the team points out that attackers can also choose to publish an infected neural network on online public repositories like GitHub, where it can be downloaded on a larger scale. In addition, attackers can deploy a more sophisticated form of delivery through what is known as a supply chain attack (also called a value-chain or third-party attack). This method involves having the malware-embedded models pose as automatic updates, which are then downloaded and installed onto target devices. ... The team notes, however, that it is possible to destroy the embedded malware by retraining and fine-tuning models after they are downloaded, as long as the infected neural network layers are not “frozen”; the parameters of frozen layers are not updated during the next round of fine-tuning, which would leave the embedded malware intact. “For professionals, the parameters of neurons can be changed through fine-tuning, pruning, model compression or other operations, thereby breaking the malware structure and preventing the malware from recovering normally,” said the team.
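The reason payloads can hide in model weights at all is that the low-order mantissa bits of a float barely affect its value. The following Python sketch (a toy illustration of the steganographic principle, not the paper's actual embedding method) hides one byte in the last mantissa byte of a float32 weight:

```python
import struct

def float32_bits(x):
    """Round x to float32 and return its raw 32-bit integer representation."""
    return struct.unpack("<I", struct.pack("<f", x))[0]

def embed_byte(weight, payload):
    """Overwrite the least-significant mantissa byte of a float32 weight."""
    bits = (float32_bits(weight) & 0xFFFFFF00) | payload
    return struct.unpack("<f", struct.pack("<I", bits))[0]

def extract_byte(weight):
    """Read the hidden byte back out of the weight's low bits."""
    return float32_bits(weight) & 0xFF

w = 0.7231                 # a stand-in "model weight"
w2 = embed_byte(w, 0x41)   # hide one payload byte in its low bits
```

The perturbed weight differs from the original by well under 0.01% of its value, which is why inspecting weights rarely reveals anything, and why retraining or fine-tuning, which rewrites these low-order bits wholesale, destroys the payload.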


Moore’s Not Enough: 4 New Laws of Computing

Law 1. Yule’s Law of Complementarity - From a strategic point of view, technology firms ultimately need to know which complementary element of their product to sell at a low price—and which complement to sell at a higher price. And, as the economist Bharat Anand points out in his celebrated 2016 book The Content Trap, proprietary complements tend to be more profitable than nonproprietary ones. ... Law 3. Evans’s Law of Modularity - Evans’s Law could be formulated as follows: The inflexibilities, incompatibilities, and rigidities of complex and/or monolithically structured technologies could be simplified by the modularization of the technology structures (and processes). ... In other words, modularization of software projects and the development process makes such endeavors more efficient. As outlined in a helpful 2016 Harvard Business Review article, the preconditions for an agile methodology are as follows: The problem to be solved is complex; the solutions are initially unknown, with product requirements evolving; the work can be modularized; and close collaboration with end users is feasible.


Cisco announces Wi-Fi 6E, private 5G to assist with hybrid work

The new Cisco Wi-Fi 6E products are the first high-end 6E access points that will assist businesses and their workers with high-traffic hybrid setups. The Wi-Fi 6E will also open up a new spectrum in the form of the 6GHz band. The access points will use the newly available spectrum that matches wired speeds and gigabit performance. Wi-Fi 6E will also greatly expand capacity and performance for the latest devices using collaborative applications designed for hybrid work and coupled with Cisco’s DNA Center and DNA Spaces. With these upgrades, the company is promoting collaboration in the offices, campuses and branches by delivering Internet of Things and operational technology benefits in smart buildings at scale. Also announced were three new varieties of the expansion of the Catalyst 9000x line of switches in the forms of the 9300x, the 9400x and the 9500x/9600x to help support the traffic brought in from increased wireless capacity. The 9300x will sport 48 Universal Power Over Ethernet (UPOE) ports for small to large campus access and aggregation deployments with Multigigabit Ethernet (mGig) speeds and 100G Uplink Modules in a stackable switching platform.


Supercomputers, AI and the metaverse: here’s what you need to know

Meta has promised a host of revolutionary uses of its supercomputer, from ultrafast gaming to instant and seamless translation of mind-bendingly large quantities of text, images and videos at once — think about a group of people simultaneously speaking different languages, and being able to communicate seamlessly. It could also be used to scan huge quantities of images or videos for harmful content, or identify one face within a huge crowd of people. The computer will also be key in developing next-generation AI models, it will power the Metaverse, and it will be a foundation upon which future metaverse technologies can rely. But the implications of all this power mean that there are serious ethical considerations for the use of Meta’s supercomputer, and for supercomputers more generally. ... The age of AI also brings with it key questions about human privacy and the privacy of our thoughts. To address these concerns, we must seriously examine our interaction with AI. When we look at the ethical structures of AI, we must ensure its usage is transparent, explainable, bias-free, and accountable.



Quote for the day:

"Strong leaders encourage you to do things for your own benefit, not just theirs." -- Tim Tebow

Daily Tech Digest - January 26, 2022

Science Made Simple: What Is Exascale Computing?

Exascale computing is unimaginably faster than that. “Exa” means 18 zeros. That means an exascale computer can perform more than 1,000,000,000,000,000,000 FLOPS, or 1 exaFLOP. That is more than one million times faster than ASCI Red’s peak performance in 1996. Building a computer this powerful isn’t easy. When scientists started thinking seriously about exascale computers, they predicted these computers might need as much energy as up to 50 homes would use. That figure has been slashed, thanks to ongoing research with computer vendors. Scientists also need ways to ensure exascale computers are reliable, despite the huge number of components they contain. In addition, they must find ways to move data between processors and storage fast enough to prevent slowdowns. Why do we need exascale computers? The challenges facing our world and the most complex scientific research questions need more and more computer power to solve. Exascale supercomputers will allow scientists to create more realistic Earth system and climate models. 
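The arithmetic behind that comparison is straightforward. Assuming ASCI Red's 1996 peak of roughly one teraFLOP (it was the first machine to reach 10^12 FLOPS), a quick sketch:

```python
exaflop = 10 ** 18   # "exa" means 18 zeros: one exaFLOP
teraflop = 10 ** 12  # ASCI Red's 1996 peak was roughly 1 teraFLOP

# An exascale machine performs at least a million times more
# floating-point operations per second than ASCI Red did.
ratio = exaflop // teraflop
```

Since exascale machines exceed one exaFLOP, the "more than one million times faster" claim follows directly from the six-order-of-magnitude gap.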


What CISA Incident Response Playbooks Mean for Your Organization

Most of the time, organizations struggle to exercise their incident response and vulnerability management plans. An organization can have the best playbook out there, but if it doesn’t exercise it on a regular basis, well, ‘If you don’t use it, you lose it’. It needs to make sure that its playbooks have the proper scope so that everyone from executives to everyone else within the organization knows what they need to know… When I say ‘exercise’, it’s important that organizations test their plans under realistic conditions. I’m not saying they need to unplug a device or bring in simulated bad code. They just need to make sure everyone tasked in the playbook knows what’s going on, understands what their roles are and periodically tests the plans. They can take the lessons they’ve learned and refine them. Incident response exercises don’t end with victory. They end with lessons for the future. Ultimately, documents that sit on a shelf rarely get read. To be high-performing, industry, government and critical infrastructure organizations need to continue to test their technology, processes and people.


Is Remix JS the Next Framework for You?

While the concept of a route is not new in any web framework, the definition of one begins in Remix by creating the file that will contain its handler function. As long as you define the file inside the right folder, the framework will automatically create the route for you. And to define the right handler function, all you have to remember is to export it as a default export. ... For static content, the above code snippet is fantastic, but if you’re looking to create a web application, you’ll need some dynamic behavior. And that is where Loaders and Actions come into play. Both are functions that, if you export them, Remix will execute before your route’s actual handler code. These functions receive multiple parameters, including the HTTP request and the URL’s params and payloads. The loader function is specifically called for GET verbs on routes, and it’s used to get data from a particular source (i.e., reading from disk, querying a database, etc.). The function gets executed by Remix, but you can access the results by calling the useLoaderData hook.


3 Fintech Trends of 2022 as seen by the legal industry

User consent is the foundation of open banking, whilst transparency as to where their data goes and who it is shared with is a necessary pre-condition of customer trust. The fintech sector should avoid following in the footsteps of the ad-tech industry, where entire ecosystems were built with a disregard for individuals’ rights and badly worded consent requests. Here, data collected by tracking technologies sunk into the ad-tech ecosystems without a trace, leaving privacy notices so confusing and complex that even seasoned data protection lawyers struggled to understand them. The full potential of open banking can only happen if financial ecosystems are built on transparency which gives users control over who can access their financial data and how it can be used. ... Innovative fintech solutions will need to strike the right balance between the need for regulatory compliance regarding consent, authentications, security and transparency on the one hand, and seamless user experience on the other, in particular when more complex ecosystems and relationships between various products start emerging.


Short-Sightedness Is Failing Data Governance; a Paradigm Shift Can Rectify It

“While organisations understand that data governance is important, many in the region feel that they have invested enough. And that's why data governance implementations are failing because it's still seen largely as an expense,” says Budge in an exclusive interview with Data & Storage Asean. “There's no doubt that it is a significant expense but rightly so, given that so much of digital transformation success is hinged on the proper deployment and consistent execution of a data governance program. Essentially, data governance is not a one-off investment—something you build and walk away—but requires actual ongoing practice and oversight.” Budge adds: “Executives often see only the upfront costs. For the short-sighted, the costs alone are reason enough to curtail further investment. ...” This short-sightedness, though, is not the only reason data governance is largely failing. Another pain point is what Budge describes as “the lack of understanding of the importance of a sound data governance strategy and the value that it can drive.”


Meta is developing a record-breaking supercomputer to power the metaverse

According to Meta, realizing the benefits of self-supervised learning and transformer-based models requires applying them across various domains, whether vision, speech, or language, as well as in critical applications like identifying harmful content. AI at Meta’s scale will require massively powerful computing solutions capable of instantly analyzing ever-increasing amounts of data. Meta’s RSC is a breakthrough in supercomputing that will lead to new technologies and customer experiences enabled by AI, said Lee. “Scale is important here in multiple ways,” said Lee. ... “Secondly, AI projects depend on large volumes of data — with more varied and complete data sets providing better results. Thirdly, all of this infrastructure has to be managed at the end of the day, and so space and power efficiency and simplicity of management at scale is critical as well. Each of these elements is equally important, whether in a more traditional enterprise project or operating at Meta’s scale,” Lee said.


How AI Will Impact Your Daily Life In The 2020s

Every single sector of the economy will be transformed by AI and 5G in the next few years. Autonomous vehicles may result in reduced demand for cars, and car parking spaces within towns and cities will be freed up for other uses. It may be that people will not own a car and will instead pay a fee for a car-pooling or ride-sharing option, whereby an autonomous vehicle will pick them up, take them to work or shopping, and then, rather than remaining stationary in a car park, move on to its next customer journey. The interior of the car will use AR with holographic technologies to provide an immersive and personalised experience, using AI to deliver targeted and location-based marketing to support local stores and restaurants. Machine-to-machine communication will be a reality, with computers on board vehicles exchanging braking, speed, location and other relevant road data with each other, and techniques such as multi-agent deep reinforcement learning may be used to optimise the decision-making of autonomous vehicles.


My New Big Brain Way To Handle Big Data Creatively In Julia

In 2022, 8 gigabytes of memory is quite a low amount, but usually this is not such a big hindrance until it suddenly is. Really what has happened is that Julia has spoiled us. I know that I can pass fifty million observations through something with no questions, comments, or concerns from my processor in Julia, no problem. It is the memory, however, whose limits I am often running into. That being said, I wanted to explore some ideas on decomposing an entire feature’s observations into a “canonical form” of sorts, and I started researching precisely those topics. My findings in regards to ways to preserve memory have been pretty cool, so I thought it might be an interesting read to look at all that I have learned, and additionally a pretty nice idea I came up with myself. All of this code is part of my project, OddFrames.jl, which is a DataFrames.jl alternative with more features, and I am almost ready to publish this package.
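The OddFrames.jl internals aren't reproduced in this excerpt, but the memory-saving idea it gestures at — storing a repetitive column as a small pool of unique values plus one small integer index per observation — can be sketched in a few lines (Python rather than Julia here, and the function names are illustrative, not from the package):

```python
def to_canonical(column):
    """Encode a column as (unique value pool, index per observation)."""
    pool = []      # each distinct value stored exactly once
    lookup = {}    # value -> position in the pool
    indices = []   # one small integer per observation
    for value in column:
        if value not in lookup:
            lookup[value] = len(pool)
            pool.append(value)
        indices.append(lookup[value])
    return pool, indices

# A feature with 150,000 observations but only three distinct values.
column = ["low", "medium", "high"] * 50_000
pool, indices = to_canonical(column)
print(len(pool))     # 3
print(len(indices))  # 150000
```

The pool holds just three strings while the observations shrink to machine integers; the same trick underlies the categorical or "pooled" array types found in most dataframe libraries.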


Four Trends For Cellular IoT In 2022

No-code applied to cellular IoT management is an alternative to APIs, an accessible route to automation for non-developer teams. According to Gartner, 41% of employees outside the IT function are customizing or building data or technology solutions. The interest and willingness are there, and increasingly so are the tools. Automation tools enable teams with minimal to no hand-coding experience to automate workflows that would previously wait in a backlog for the attention of a specialist developer. IoT needs scale; there can be no hold-ups or bottlenecks in bringing projects to completion. Applying the benefits of no-code to cellular IoT addresses this. There will always be high demand for skilled software developers to tackle complex development projects. The transition to the cloud did not displace system administrators, and no-code solutions will not replace specialist software developers; development ability is still needed. The no-code opportunity is in repetitive tasks such as activating an IoT SIM card. Using no-code, this workflow can easily be automated, freeing up developer resources for more complex integrations.


Web3 – but Who’s Counting?

Regardless of the technology that eventually supports Web3, the key will be distribution; data can’t be trapped in a single place. Let me give you an example: data.world may seem like a Web 2.0 application. It’s collaborative, users generate content in the form of data and analysis, which can be loaded into our servers. That can feel like handing over control. However, unlike the case for today’s data brokers — Facebook, Amazon, etc. — you didn’t give up rights to your data; it is still yours to modify, restrict, or even delete at your discretion. More technically, data.world is built on the Semantic Web standards. This means that if you don’t want your data hosted by data.world, that’s just fine. Host it under some other SPARQL endpoint, give data.world a pointer to your data, and it will behave just the same as if it were hosted with us. Deny access to that endpoint — or just remove it — and it’s gone. This is not to say that data.world is the solution to Web3, here today; far from it. We still don’t really know what Web3 will turn out to be. But one thing is for certain — any Web3 platform will have to play in a world of distributed data.



Quote for the day:

"Small disciplines repeated with consistency every day lead to great achievements gained slowly over time." -- John C. Maxwell

Daily Tech Digest - December 24, 2021

A CIO’s Guide To Hybrid Work

CIOs reimagining an organization’s digital strategy need to ensure that their employees can communicate effectively and have complete access to resources needed to perform their jobs. This means that employees do not receive just their laptops and an email account but have full access to a complete tech stack and set of solutions that empower them to interact with their peers and customers. AI- and ML-powered solutions help enhance the employee experience by saving time for people to connect with their teams and helping infuse mental well-being along with a company’s values and purpose. The best way to understand whether your employees are well supported to carry on their job is by gathering feedback from them. Send out a simple form with both open and closed questions on the potential communication gaps, remote work support and access to available resources. Once you have all the information, analyze the gaps and improvement opportunities to pick the right tools. Make sure that the tools you choose integrate with your organization’s tech ecosystem while delivering value.


Whatever Happened to Business Supercomputers?

Supercomputers are primarily used in areas in which sizeable models are developed to make predictions involving a vast number of measurements, notes Francisco Webber, CEO at Cortical.io, a firm that specializes in extracting value from unstructured documents. “The same algorithm is applied over and over on many observational instances that can be computed in parallel,” says Webber, “hence the acceleration potential when run on large numbers of CPUs.” Supercomputer applications, he explains, can range from experiments in the Large Hadron Collider, which can generate up to a petabyte of data per day, to meteorology, where complex weather phenomena are broken down to the behavior of myriads of particles. There's also a growing interest in graphics processing unit (GPU)- and tensor processing unit (TPU)-based supercomputers. “These machines may be well suited to certain artificial intelligence and machine learning problems, such as training algorithms [and] analyzing large volumes of image data,” Buchholz says.


The State of Hybrid Workforce Security 2021

The time is right for IT leaders to turn to their teams and gain a clear understanding of what they actually have in place. While the initial response to the pandemic was reactionary, now is a moment to assess an organization’s app and security landscape and what is actually providing access to users no matter where they are, whether they’re at home, in the branch, or anywhere in between. Rationalizing the purpose and usage of solutions that are in place today provides a real opportunity for consolidation—one that did not seriously exist previously. Many organizations will be able to drive better outcomes around security posture, reducing risk, and improving total cost of ownership. Consolidating the number of disparate tools in use to provide secure user access improves security posture consistency and reduces the number of policies that have to be administered. Besides reducing needed multi-product training and management effort, a platform approach drives better economies of scale, resulting in a lower total cost of ownership. Net-net, consolidation delivers a far more effective approach for security.


What is Web3, is it the new phase of the Internet and why are Elon Musk and Jack Dorsey against it?

In the Web3 world, search engines, marketplaces and social networks will have no overriding overlord. So you can control your own data and have a single personalised account where you could flit from your emails to online shopping and social media, creating a public record of your activity on the blockchain system in the process. A blockchain is a secure database that is operated by users collectively and can be searched by anyone. People are also rewarded with tokens for participating. It comes in the form of a shared ledger that uses cryptography to secure information. This ledger takes the form of a series of records or “blocks” that are each added onto the previous block in the chain, hence the name. Each block contains a timestamp, data, and a hash. This is a unique identifier for all the contents of the block, sort of like a digital fingerprint. ... The idea of a decentralised internet may sound far-fetched but big tech companies are already betting big on it and even assembling Web3 teams.
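The block structure described above — a timestamp, data, and a hash that chains each block to its predecessor — can be sketched in a few lines (a toy illustration of the concept, not any production blockchain):

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Build a block whose hash covers its timestamp, data, and parent hash."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    # The hash is the block's digital fingerprint: any change to the
    # contents produces a completely different value.
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

genesis = make_block("genesis", "0" * 64)
second = make_block("alice pays bob", genesis["hash"])

# Each block points at its parent's fingerprint, so tampering with an
# earlier block would break every link after it.
assert second["prev_hash"] == genesis["hash"]
```

Because each `prev_hash` commits to the whole of the previous block, rewriting history requires recomputing every subsequent block, which is what makes a shared ledger tamper-evident.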


Will A.I. Guarantee Our Humane Futures?

Both private firms and governments adopting A.I.-driven technologies could be attracted to the opportunity of violating individuals’ privacy and data security for their own selfish reasons. Large private corporations, especially technology and social media companies such as the big four of big tech (Google, Amazon, Apple, and Facebook), are already sitting on massive quantities of user data, which they are looking to monetize, and such monetization of data in the name of customized services and targeted advertisements could have a disastrous impact on users’ privacy and data security. The bigger threat will emerge when such sensitive user data is misused for social engineering to alter customers’ behavior and choices. ... Today, algorithms are so sophisticated that they can predict a user’s next action based on analysis of their private data. It is entirely possible to use such data to nudge an individual discreetly into altering his behavior and choices, and this has far-reaching implications for the economy, for society, and for the security of a democratic nation.


Protection against the worst consequences of a cyberattack

Businesses need an incident response plan that clearly outlines the steps to be followed when a data breach occurs. By neglecting to create one, the organization becomes the low-hanging fruit that attackers go after. Even a rudimentary plan is better than no plan at all, and those without one will suffer a much higher impact. Teams need to identify and classify data to understand what levels of protection are needed, a step that is regrettably missed all the time. For instance, personally identifiable customer information needs a different level of protection than the photos from the last Christmas party. Teams also need to maintain cyber hygiene through regular patching, and since 90% of breaches start with an email, it is very important to have email protection, multi-factor authentication and endpoint protection to prevent any lateral movement by cybercriminals. Perhaps my biggest piece of advice is to have experienced personnel monitoring your environment 24/7, 365 days a year (including Christmas).


Initial access brokers: How are IABs related to the rise in ransomware attacks?

Initial access brokers sell access to corporate networks to any person wanting to buy it. Initially, IABs were selling company access to cybercriminals with various interests: getting a foothold in a company to steal its intellectual property or corporate secrets (cyberespionage), finding accounting data enabling financial fraud or even just credit card numbers, adding corporate machines to botnets, using the access to send spam, destroying data, etc. There are many cases in which buying access to a company can be attractive to a fraudster, but that was before the ransomware era. ... Ransomware groups saw an opportunity here to suddenly stop spending time on the initial compromise of companies and to focus on the internal deployment of their ransomware and sometimes the complete erasure of the companies' backup data. The cost for access is negligible compared with the ransom that is demanded of the victims. IAB activities became increasingly popular in cybercriminal underground forums and marketplaces.


8 Real Ways CIOs Can Drive Sustainability, Fight Climate Change

The concept of the circular economy has been around for a while, but it’s now taking off in a big way. NTT’s Lombard says that it’s a key to getting to net zero. This means establishing business and IT supply chains that focus on optimizing the lifespan of equipment, moving toward zero-emission closed loop recycling and curtailing e-waste. For example, there’s a growing second-hand market for high-end gear, including hyperscale infrastructure. Companies like IT Renew recertify these systems and place them under warranty. “Everyone wins,” says Lucas Beran, principal analyst at consulting firm Dell’Oro Group. “The original user gets two or three years of use; the buyer gets another three or four years -- all while TCO and the carbon footprint drop.” ... Data centers are expected to consume about 8% of the world's electricity by 2030. While refreshing legacy servers, optimizing data, virtualizing workloads, consolidating virtual machines and green hosting all deliver benefits, these strategies aren’t enough to tackle climate change. Organizations must fundamentally rethink data center design and function.


How Safety Became One of The Most Critical Smart City Applications

For cities, it can be challenging to ensure citizen and worker safety when natural disasters occur. Incidents such as hurricanes, floods, fires and gas leaks are unpredictable and often impossible to prevent. To put it in perspective, most people have lived through some disaster, with 87% of consumers saying they’ve been impacted by one in the last five years (not counting the COVID pandemic). Safety will only become more critical over the next few decades as natural disasters become more frequent, intense and costly. Since 1970, the number of disasters worldwide has more than quadrupled to around 400 a year. Since 1998, natural disasters worldwide have killed more than 1.3 million people and left another 4.4 billion injured, homeless, displaced, or in need of emergency assistance. Smart sensors and advanced analytics can help communities better predict, prepare for and respond to these emergency situations. For example, IoT sensors, such as pole-tilt, electric distribution line, leak detection and air quality sensors, can be leveraged to mitigate risk and minimize damage.


Avoiding Technical Bankruptcy: a Whole-Organization Perspective on Technical Debt

It is regrettable that the meaning of the technical debt metaphor has been diluted in this way, but in language as in life in general, pragmatics trump intentions. This is where we are: what counts as "technical debt" is largely just the by-product of normal software development. Of course, no-one wants code problems to accumulate in this way, so the question becomes: why do we seem to incur so much inadvertent technical debt? What is it about the way we do software development that leads to this unwanted result? These questions are important, since if we can go into technical debt, then it follows that we can become technically insolvent and go technically bankrupt. In fact, this is exactly what seems to be happening to many software development efforts. Ward Cunningham notes that "entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation". That stand-still is technical bankruptcy.



Quote for the day:

“When you take risks you learn that there will be times when you succeed and there will be times when you fail, and both are equally important.” -- Ellen DeGeneres

Daily Tech Digest - October 01, 2021

6 steps for third-party cyber risk management

Classify vendors based on the inherent risk they pose to the organization (i.e., risk that doesn’t take into account existing mitigations). To do this, create a scoping questionnaire that can be completed by the employee who owns the vendor relationship to capture vital information regarding the service being offered, the location and level of data being accessed, stored or processed, and other factors that indicate what kind of security assessment may be needed. Every vendor presents a different level of risk. For example, vendors that provide critical services usually have access to sensitive information and therefore pose a larger threat to the organization. This is where a vendor risk questionnaire comes in. You can develop your own or use one of the templates available online. In certain cases your organization may be required to comply with standards like SOC 2 Type 2, ISO 27001, NIST SP 800-53, NIST CSF, PCI-DSS, CSA CCM, etc. It’s also important that your questionnaire covers questions related to such frameworks and compliance requirements.
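As a toy illustration of how scoping-questionnaire answers can be turned into an inherent-risk classification, here is a minimal scoring sketch (the field names, weights, and thresholds are hypothetical, not drawn from any framework):

```python
def inherent_risk_score(answers):
    """Classify a vendor from questionnaire answers on a 1-5 scale."""
    # Hypothetical weights: data sensitivity matters most, then the
    # vendor's access level and how critical the service is.
    weights = {"data_sensitivity": 3, "access_level": 2, "service_criticality": 2}
    score = sum(weights[field] * answers[field] for field in weights)
    if score >= 25:
        return "high"
    if score >= 15:
        return "medium"
    return "low"

# A vendor handling highly sensitive data with broad access.
print(inherent_risk_score(
    {"data_sensitivity": 5, "access_level": 4, "service_criticality": 3}
))  # high
```

A real program would derive the weights and thresholds from the organization's risk appetite, and the resulting tier would then drive how deep the follow-up security assessment needs to go.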


Incentivizing Developers is the Key to Better Security Practices

To help development teams improve their cybersecurity prowess, they must first be taught the necessary skills. Utilizing scaffolded learning, and tools like Just-in-Time (JiT) training can make this process much less painful, and helps to build upon existing knowledge in the right context. The principle of JiT is that developers are served the right knowledge at just the right time, for example, if a JiT developer training tool detects that a programmer is creating an insecure piece of code, or is accidentally introducing a vulnerability into their application, it can activate and show the developer how they could fix that problem, and how to write more secure code to perform that same function in the future. With a commitment to upskilling in place, the old methods of evaluating developers based solely on speed need to be eliminated. Instead, coders should be rewarded based on their ability to create secure code, with the best developers becoming security champions that help the rest of the team improve their skills. 


The Turbulent Past And Uncertain Future Of AI

Although deep-learning systems tend to be black boxes that make inferences in opaque and mystifying ways, neuro-symbolic systems enable users to look under the hood and understand how the AI reached its conclusions. The U.S. Army is particularly wary of relying on black-box systems, as Evan Ackerman describes in "How the U.S. Army Is Turning Robots Into Team Players," so Army researchers are investigating a variety of hybrid approaches to drive their robots and autonomous vehicles. Imagine if you could take one of the U.S. Army's road-clearing robots and ask it to make you a cup of coffee. That's a laughable proposition today, because deep-learning systems are built for narrow purposes and can't generalize their abilities from one task to another. What's more, learning a new task usually requires an AI to erase everything it knows about how to solve its prior task, a conundrum called catastrophic forgetting. At DeepMind, Google's London-based AI lab, the renowned roboticist Raia Hadsell is tackling this problem with a variety of sophisticated techniques.
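Catastrophic forgetting can be demonstrated with even a one-parameter model: fit task A, fine-tune the same weight on task B, and performance on task A collapses. A deliberately minimal sketch (this illustrates the problem itself, not any of the DeepMind techniques mentioned):

```python
def train(w, xs, ys, lr=0.01, steps=2000):
    """Gradient descent on mean squared error for the model y = w * x."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

def mse(w, xs, ys):
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

xs = [1.0, 2.0, 3.0]
task_a = [2 * x for x in xs]    # task A: y = 2x
task_b = [-1 * x for x in xs]   # task B: y = -x

w = train(0.0, xs, task_a)
err_a_before = mse(w, xs, task_a)   # near zero after learning task A

w = train(w, xs, task_b)            # fine-tune the same weight on task B...
err_a_after = mse(w, xs, task_a)    # ...and task A has been forgotten

print(err_a_before < 1e-6, err_a_after > 1.0)  # True True
```

Because the two tasks demand incompatible values of the same weight, optimizing for the second destroys the first; hybrid and continual-learning approaches exist precisely to let new knowledge accumulate without overwriting the old.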


Increase Your DevOps Productivity Using Infrastructure as Low Code

Often what people focus on around DevOps is the tooling element, as this often leads people down the continuous integration and continuous delivery route, aka CI/CD. One of the most popular open-source CI/CD tools is Jenkins, which is an all-in-one automation server that brings together the various parts of the software development life cycle. There are endless tools available on the market that can fit into your DevOps processes and cover virtually any technology stack you can think of these days. As Jenkins is one of the most popular, let’s take a look at some of its pros and cons in comparison with other infrastructure as low code tools. With Jenkins being open source, you get full control over the platform and what you do with it. Unfortunately this also puts all the responsibility on you to make sure it’s doing what it should be doing. Starting at the infrastructure level, this is something you have to host yourself, which naturally comes with an associated cost for the underlying resources.


Russian Scientists Use Supercomputer To Probe Limits of Google’s Quantum Processor

From the early days of numerical computing, quantum systems have appeared exceedingly difficult to emulate, though the precise reasons for this remain a subject of active research. Still, this apparently inherent difficulty of a classical computer to emulate a quantum system prompted several researchers to flip the narrative. Scientists such as Richard Feynman and Yuri Manin speculated in the early 1980s that the unknown ingredients which seem to make quantum computers hard to emulate using a classical computer could themselves be used as a computational resource. For example, a quantum processor should be good at simulating quantum systems, since they are governed by the same underlying principles. Such early ideas eventually led to Google and other tech giants creating prototype versions of the long-anticipated quantum processors. These modern devices are error-prone, they can only execute the simplest of quantum programs and each calculation must be repeated multiple times to average out the errors in order to eventually form an approximation.


The Eclat algorithm

In this article, you will learn everything that you need to know about the Eclat algorithm. Eclat stands for Equivalence Class Clustering and Bottom-Up Lattice Traversal, and it is an algorithm for association rule mining (which also encompasses frequent itemset mining). Association rule mining and frequent itemset mining are easiest to understand through their application to basket analysis: the goal is to understand which products are often bought together by shoppers. These association rules can then be used, for example, in recommender engines (in the case of online shopping) or for store improvement in offline shopping. The ECLAT algorithm is not the first algorithm for association rule mining. The foundational algorithm in the domain is the Apriori algorithm. Since the Apriori algorithm was the first proposed in the domain, it has been improved upon in terms of computational efficiency (i.e., faster alternatives have been made). There are two state-of-the-art faster alternatives to the Apriori algorithm: one of them is FP-Growth and the other is ECLAT.
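ECLAT's distinguishing feature is its vertical data layout: each item maps to the set of transaction IDs (a "tidlist") that contain it, and the support of any itemset is simply the size of the intersection of its members' tidlists. A minimal sketch of that idea:

```python
def eclat(transactions, min_support):
    """Frequent itemset mining via depth-first tidlist intersection."""
    # Vertical database: item -> set of transaction ids containing it.
    tidlists = {}
    for tid, items in enumerate(transactions):
        for item in items:
            tidlists.setdefault(item, set()).add(tid)

    frequent = {}

    def recurse(prefix, prefix_tids, candidates):
        for i, (item, tids) in enumerate(candidates):
            # Support of prefix + item = size of the tidlist intersection.
            new_tids = prefix_tids & tids if prefix else tids
            if len(new_tids) >= min_support:
                itemset = prefix + (item,)
                frequent[itemset] = len(new_tids)
                # Only extend with later items to avoid duplicates.
                recurse(itemset, new_tids, candidates[i + 1:])

    recurse((), set(), sorted(tidlists.items()))
    return frequent

baskets = [{"bread", "milk"}, {"bread", "butter"},
           {"bread", "milk", "butter"}, {"milk"}]
print(eclat(baskets, min_support=2))
# {('bread',): 3, ('bread', 'butter'): 2, ('bread', 'milk'): 2,
#  ('butter',): 2, ('milk',): 3}
```

Unlike Apriori's repeated passes over the horizontal transaction list, each extension here costs only one set intersection, which is where ECLAT's speed advantage comes from.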


Why Coding Interviews Are Getting So Hard?

If candidates are sheep, then interviewers are wolves. The sheep learn to run faster and faster because they want to survive, and so do the wolves. Years ago, there weren’t any interview practice materials. New grads would review their Data Structure and Algorithm textbooks to prepare for coding interviews. And we would turn to senior students who had been through some interview process to pick up some wisdom. ... If you are an interviewer, try to avoid problems that are easily available on the internet, or at least tweak them before using them. Try to avoid problems that clearly require practicing, i.e., dynamic programming. Try to focus less on whether a problem is solved perfectly and instead pay more attention to how candidates think and approach the problem. If you are a candidate, prepare for the interviews as hard as you can! Frankly speaking, that may not be the best way to use your time. But you need to do what you need to do. And after the interview, don’t share the problems. The world is big and pretty diverse. The discussions above are based on my very limited experience. And they might be wrong in a different context.


For networking pros, every month is Cybersecurity Awareness Month

Not sure why the organizers didn’t make “Cybersecurity First” the theme of the month’s first week, but it is not for me to second-guess the federal Cybersecurity & Infrastructure Security Agency (CISA) and the public/private National Cyber Security Alliance (NCSA), organizers of the annual awareness month. NCSAM is a great idea, just as is Bat Appreciation Month, Church Library Month, and International Walk to School Month, all of which also occur in October. It’s always good to be reminded that precautions and safeguards are needed when navigating a sometimes dangerous digital world. And that walking to school benefits students physically and mentally. For enterprise professionals, of course, every month is Cybersecurity Awareness Month. Security constantly is on the minds of enterprise IT pros, if not the minds of enterprise workers (sore subject!). And well it should be, coming off a year described by the CrowdStrike 2021 Global Threat Report as “perhaps the most active year in memory.”


Cloud computing in manufacturing: from impossible to indispensable

Advancements in infrastructure, combined with the exponential growth of software offerings in the cloud, have accelerated the digitisation of supply chains, allowing companies to operate and interact with each other in a more transparent and automated way. Companies are quickly expanding their operational intelligence, moving from single-asset descriptive analytics – where manufacturers are informed of what has happened – to prescriptive analytics – where manufacturers are informed of options to respond to what’s about to happen – across multiple lines, factories, and all the way to critical elements of their supply chain. The exponential value creation cycle enabled by the Cloud Continuum does not depend on IT only. It requires organisations to have a well-defined vision, an adequate operating model, and a properly designed set of technology adoption principles. The adoption of cloud solutions without these three components usually leads to difficulty scaling and sustaining the intended benefits. In summary, cloud adoption in manufacturing went from a concept deemed impossible to an indispensable capability.


Today’s cars are mobile data centers, and that data needs to be protected

The utopian vision of the AV paradigm removing the stress of having to pilot the vehicle, improving road safety, and managing urban traffic flows has already given rise to what manufacturers are referring to as the “passenger economy”. While we are chauffeured by software, we will be able to work, shop, and play from the comfort of our seats with continuous network connectivity. Independent of our own data demand, our vehicles will also be sending and receiving sensor and telemetry data with other vehicles to avoid collisions, with our smart cities to ensure an efficient journey time, and with the manufacturer to schedule maintenance and contribute to the next generation of car design. All this critical data, however, could form the basis of a dystopian nightmare. Compromised applications might disable the software controlling safety systems on which AVs will depend. Knowledge of the driver’s identity, social media streams, and location might trigger an avalanche of targeted advertising from local services, a loss of privacy, and potentially compromised personal safety.



Quote for the day:

"Leaders dig into their business to learn painful realities rather than peaceful illusion." -- Orrin Woodward

Daily Tech Digest - September 02, 2021

Cyber Security In Cars

ISO/SAE 21434, Road vehicles – Cybersecurity engineering, addresses the cybersecurity perspective in engineering of electrical and electronic (E/E) systems within road vehicles. It will help manufacturers keep abreast of changing technologies and cyber-attack methods, and defines the vocabulary, objectives, requirements and guidelines related to cybersecurity engineering for a common understanding throughout the supply chain. The standard, developed in collaboration with SAE International, a global association of engineers and a key ISO partner, draws on the recommendations detailed in SAE J3061, Cybersecurity guidebook for cyber-physical vehicle systems, offering more comprehensive guidance and the input of experts all around the world. Dr Gido Scharfenberger-Fabian, Convenor of the group of ISO experts that developed the standard, said it will enable organizations to define cybersecurity policies and processes, manage cybersecurity risk and foster a cybersecurity culture. “ISO/SAE 21434 will help consider cybersecurity issues at every stage of the development process and in the field, increasing the vehicle’s own cybersecurity defences and mitigating the risk of potential vulnerabilities for every component,” he said.


Ultimate Guide to Becoming a DevOps Engineer

The job title DevOps Engineer is thrown around a lot and it means different things to different people. Some people claim that the title DevOps Engineer shouldn’t exist, because DevOps is ‘a culture’ or ‘a way of working’—not a role. The same people would argue that creating an additional silo defeats the purpose of overlapping responsibilities and having different teams working together. These arguments are not wrong. In fact, some companies that understand and do DevOps engineering very well don’t even have a role with that name (like Google!). The truth is that whenever you see DevOps Engineer jobs advertised, the ad might actually be for an infrastructure engineer, a systems reliability engineer (SRE), a CI/CD engineer, a sysadmin, etc. So the definition for DevOps engineer is rather broad. One thing that’s certain though is to be a DevOps engineer, you must have a solid understanding of the DevOps culture and practices and you should be able to bridge any communication gaps between teams in order to achieve software delivery velocity. 


WhatsApp fined a record 225 mln euro by Ireland over privacy

A WhatsApp spokesperson said in a statement the issues in question related to policies in place in 2018 and the company had provided comprehensive information. "We disagree with the decision today regarding the transparency we provided to people in 2018 and the penalties are entirely disproportionate," the spokesperson said. EU privacy watchdog the European Data Protection Board said it had given several pointers to the Irish agency in July to address criticism from its peers for taking too long to decide in cases involving tech giants and for not fining them enough for any breaches. It said a WhatsApp fine should take into account Facebook's turnover and that the company should be given three months instead of six months to comply. Europe's landmark privacy rules, known as GDPR, are finally showing some teeth even if the lead regulator for some tech giants appears otherwise, said Ulrich Kelber, Germany's federal commissioner for data protection and freedom of information. "What is important now is that the many other open cases on WhatsApp in Ireland are finally decided on so that we can take faster and longer strides towards the uniform enforcement of data protection law in Europe," he told Reuters.


DevOps, Low-Code and RPA: Pros and Cons

RPA programs enable companies to automate repetitive tasks by creating software scripts using a recorder. For those of us who remember using the macro recorder in Microsoft Excel, it’s a similar concept. Once the script is created, users can then use a visual editor to modify, reorder and edit its steps. Speaking to the growing popularity of these solutions was the UiPath IPO on April 21, 2021, which ended up being one of the largest software IPOs in history. The use cases for RPA programs are unlimited—any repetitive task done via a UI is a candidate. RPA is an area where we’ve seen an intersection of business-user-designed apps (UiPath and Blue Prism) with more traditional DevOps tools, specifically in the test automation space (Tricentis, Worksoft, and Eggplant), and new conversational-based solutions like Krista. In the case of test automation, a lightweight recorder is given to a business user who can then record a business process. The recording is then fed to the automation team, which creates a hardened test case that in turn is fed into a CI/CD system.


IBM quantum computing: From healthcare to automotive to energy, real use cases are in play

Quantum computers are better at that than classical computers, Utz said. Anthem is running different models on IBM's quantum cloud. Right now, company officials are building a roadmap around how Anthem wants to deliver its platform using quantum technology, so "I can't say quantum is ready for primetime yet," Utz said. "The plan is to get there over the next year or so and have something working in production." A good place to start with anomaly detection is in finding fraud, he said. "Classical computers will tap out at some point and can't get to the same place as quantum computers." Other use cases are around longitudinal population health modeling, meaning that as Anthem looks at providing more of a digital platform for health, one of the challenges is that there is "almost an infinite number of relationships," he said. This includes different health conditions, providers patients see, outcomes and figuring out where there are outliers, he said. "There's only so much a classical system can do there, so we're looking for more opportunities to improve healthcare for our members and the population at large," and the ability to proactively predict risk, Utz said. 


How to Implement Domain-Driven Design (DDD) in Golang

Domain-Driven Design is a way of structuring and modeling software after the domain it belongs to. This means the domain has to be considered first for the software being written. The domain is the topic or problem that the software intends to solve, and the software should be written to reflect it. DDD advocates that the engineering team meet with Subject Matter Experts (SMEs), the experts inside the domain, because the SMEs hold the knowledge about the domain, and that knowledge should be reflected in the software. It makes sense when you think about it: if I were to build a stock trading platform, do I as an engineer know the domain well enough to build a good one? The platform would probably be a lot better off if I had a few sessions with Warren Buffett about the domain. The architecture of the code should also reflect the domain.
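A minimal sketch of these ideas in Go, sticking with the stock trading example (all types and rules below are invented for illustration, not taken from the article): a value object compared by value, an entity with identity, a domain rule kept on the model itself, and a repository interface that keeps persistence concerns out of the domain layer.

```go
package main

import (
	"errors"
	"fmt"
)

// Money is a value object: it has no identity of its own and two
// Money values with the same fields are interchangeable.
type Money struct {
	Amount   int64 // minor units (cents) to avoid float rounding
	Currency string
}

// Order is an entity: its ID gives it an identity that persists
// even as its other attributes change.
type Order struct {
	ID     string
	Ticker string
	Qty    int
	Limit  Money
}

// Validate keeps a domain rule (learned from the SMEs) inside the
// domain model rather than scattered across handlers or storage code.
func (o Order) Validate() error {
	if o.Qty <= 0 {
		return errors.New("order quantity must be positive")
	}
	return nil
}

// OrderRepository abstracts persistence so the domain package never
// imports a database driver.
type OrderRepository interface {
	Save(Order) error
	FindByID(id string) (Order, error)
}

func main() {
	o := Order{ID: "ord-1", Ticker: "AAPL", Qty: 10, Limit: Money{Amount: 18500, Currency: "USD"}}
	fmt.Println("valid:", o.Validate() == nil)
}
```

The key design choice is directional: infrastructure code implements `OrderRepository` and depends on the domain package, never the reverse.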

 

China’s Personal Information Protection Law and Its Global Impact

The law’s restrictions on cross-border data transfers may not affect retailers that operate domestically, and hence have no need to transfer information abroad. However, the story is vastly different for two types of companies: those in possession of large amounts of personal information and those in possession of information on critical infrastructure. Moreover, PIPL declares that the authority of domestic regulators supersedes that of international treaties. PIPL will help foreign companies operating in China without cross-border data transfers to develop privacy policies in compliance with the law. Before PIPL, the lack of a domestic PI protection law led to the broad adoption of the EU’s GDPR as a privacy policy among foreign companies. However, the GDPR’s decision-making is based on agreements among EU member states, which does not apply in the case of China. Since PIPL will come into effect in November 2021, foreign firms in China will need to revise their privacy policies to fit the requirements of the new law.


10 Characteristics of an AI-Powered Enterprise

Digital transformation makes the inclusion of AI as part of the business strategy even more important than it would be otherwise because digital organizations are software companies. Since commercial applications and tools are increasingly taking advantage of AI, the logical development by extension is AI embedded in enterprise-built applications. After all, businesses are moving more data and compute to the cloud, and their new applications are being designed as cloud-first applications. Of course, AI and machine learning tooling is also available in the cloud, so developers have what they need to build “intelligent” applications. AI and machine learning don't just work on their own, however. They require testing and monitoring. “Losing trust in AI-infused applications is a high risk for AI-based innovation,” said Diego Lo Giudice, VP and principal analyst at Forrester, in a blog post. “Forrester Analytics data shows that 73% of enterprises claim to be adopting AI for building new solutions in 2021, up from 68% in 2020, and testing those AI-infused applications becomes even more critical.” Trust and safety are things that need to be proven through testing.


Why Rust is the best language for IoT development

Internet of Things (IoT) technology is rapidly terraforming the landscape of modern society right in front of our very eyes, and propelling us all into the future. It does this by providing solutions to everything from tracking your daily personal fitness goals with an Apple Watch, to completely revolutionising the entire transport sector. These devices connect to each other and form the great network required for something like a digital twin; they are constantly collating data in real time from the surrounding environment, which means the system is always using entirely current information. As amazing and powerful as this technology is, it is slightly held back by the fact that, by their very nature, IoT devices have far less processing power than your average piece of computing equipment. This requires much more efficient code to be written to fully take advantage of their raw potential without hurting the devices’ performance. This is where Rust comes into the picture as one of the very few languages that can provide a faster runtime for IoT technology.


Are Tesla’s Dojo supercomputer claims valid?

The D1, according to Tesla, features 362 teraFLOPS of processing power. This means it can perform 362 trillion floating-point operations per second (FLOPS), Tesla says. Now imagine harnessing the processing power of 25 D1 chips into a training tile, and then linking together 120 training tiles through multiple servers. That’s what Tesla is doing with the Dojo supercomputer for its autonomous cars. And with each training tile containing 9 PFLOPS of computing power, Dojo has (by my possibly inaccurate calculations) 1.08 exaFLOPS of power under its hood (Tesla calls it 1.1 EFLOPS). That kind of horsepower would make Dojo more than twice as fast as the currently acknowledged fastest supercomputer in the world, Fugaku. Built by Fujitsu, this supercomputer reaches speeds of 442 PFLOPS. Supercomputers already are being used to accelerate medical research and drug development because they are capable of quickly processing massive amounts of data. Indeed, researchers have relied on supercomputers to power COVID-19 research since the pandemic began in early 2020.
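The tile and system figures above can be checked with a few lines of arithmetic, starting from Tesla's published per-chip number (the unit conversions and rounding here are mine, not Tesla's):

```go
package main

import "fmt"

func main() {
	const d1TFLOPS = 362.0    // per D1 chip, per Tesla
	const chipsPerTile = 25.0 // D1 chips in one training tile
	const tiles = 120.0       // training tiles in Dojo

	// 25 chips * 362 TFLOPS = 9,050 TFLOPS, i.e. about 9 PFLOPS per tile.
	tilePFLOPS := d1TFLOPS * chipsPerTile / 1000 // TFLOPS -> PFLOPS

	// 120 tiles * ~9.05 PFLOPS = ~1,086 PFLOPS, i.e. ~1.09 EFLOPS,
	// matching the article's 1.08 and Tesla's rounded 1.1 figure.
	dojoEFLOPS := tilePFLOPS * tiles / 1000 // PFLOPS -> EFLOPS

	fmt.Printf("per tile: %.2f PFLOPS\n", tilePFLOPS)
	fmt.Printf("Dojo:     %.3f EFLOPS\n", dojoEFLOPS)
}
```

The small spread between 1.08, 1.086, and 1.1 EFLOPS comes entirely from where one rounds the per-tile figure.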



Quote for the day:

"Great leaders go forward without stopping, remain firm without tiring and remain enthusiastic while growing." -- Reed Markham