Daily Tech Digest - January 31, 2022

Driving digital transformation: The power of blockchain

While the BFSI sector is at the forefront of blockchain adoption, industries like healthcare can also benefit from DLT. Tech Mahindra recently partnered with StaTwig to manage the traceability of the global COVID-19 vaccine supply through blockchain technology. In addition to improving transparency across the supply chain, the VaccineLedger solution also helps prevent issues such as expired vaccines being mistakenly distributed and used. With blockchain technology, health institutions have complete traceability of each vaccine’s journey from sourcing to the hospital floor. ... Sectors like manufacturing are also finding value in blockchain technology. With blockchain, manufacturers and suppliers can trace components and raw materials through the entire remanufacturing process, ensuring that parts can be traced back to a point of origin in case of a product recall or malfunction. For example, TradeLens, a blockchain network for global shipments, has been adopted by dozens of global carriers, customs authorities, freight forwarders, and port authorities.

Meet Cadence: Workflow Engine for Taming Complex Processes

Cadence is an open source, fault-oblivious, stateful code platform and workflow engine specifically designed to solve this development challenge. Originally developed and open-sourced by Uber — and now adopted and developed by a growing number of companies, including Instaclustr — Cadence abstracts away the most difficult complexities associated with developing high-scale distributed applications. Cadence preserves the entire state of an application in durable virtual memory that is not associated with any specific process. The stored application state includes all call parameters and returned results for user-defined activities. It then uses that information to catch up and replay workflows that get interrupted. Cadence provides libraries that enable developers to create and coordinate workflows using popular languages such as Java, Go, Python and Ruby. Cadence services, such as workers, are largely stateless and leverage a data store for task/workflow persistence. Supported storage options include open source Cassandra and MySQL/PostgreSQL, and an adapter is available for any database that supports multi-row single-shard transactions.
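Cadence's replay of interrupted workflows from recorded history can be illustrated with a small, self-contained Python sketch. This mimics the idea only; it is not the Cadence API, and the workflow and activity names here are invented for the example:

```python
# Illustration of durable-state replay: a workflow's activity results are
# recorded in an event history; after a crash, a new worker replays
# completed steps from history instead of re-executing the activities.

def book_flight(order):          # hypothetical activity
    return f"flight-for-{order}"

def charge_card(order):          # hypothetical activity
    return f"charge-for-{order}"

class ReplayingWorkflow:
    def __init__(self, history=None):
        self.history = list(history or [])  # the durable event history
        self.position = 0

    def execute(self, activity, *args):
        if self.position < len(self.history):
            result = self.history[self.position]  # replay the recorded result
        else:
            result = activity(*args)              # first execution: run and record
            self.history.append(result)
        self.position += 1
        return result

def trip_workflow(wf, order):
    flight = wf.execute(book_flight, order)
    charge = wf.execute(charge_card, order)
    return flight, charge

# First run executes both activities and records their results.
first = ReplayingWorkflow()
trip_workflow(first, "o1")

# A "crashed" worker is replaced; the new one catches up from the saved
# history without re-invoking the activities.
resumed = ReplayingWorkflow(history=first.history)
print(trip_workflow(resumed, "o1"))
```

The real engine persists this history in Cassandra or MySQL/PostgreSQL rather than in memory, but the catch-up mechanism is the same in spirit.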

Extracting value from unstructured data with intelligent data management

Global data volume is predicted to reach 175 zettabytes by 2025, and that data is increasingly stored across disparate, hard-to-access silos. Visibility in the current ecosystem is poor. This growth has been fueled by market forces looking to capitalise on the value that can be extracted from data, and is mirrored by the dramatic shift to the cloud and edge. An estimated 90% of this data is unstructured information, like text, video, audio, web server logs, social media and more. And all this data can’t be moved to a central data store or processed in its entirety. Unstructured data management is now an enterprise IT priority for data-heavy organisations: they need to identify, index, tag and monetise this information. Komprise, an intelligent data management and mobility company, says its offering can solve this challenge: “We dramatically save our customers money by tiering cold data to the cloud, in a transparent, native AI/ML-ready solution that doesn’t sit in front of the hot data,” said Kumar Goswami, CEO and co-founder.
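The basic policy behind tiering cold data is simple to sketch: classify files by last-access age and move old ones to cheaper storage. The threshold and file names below are invented for illustration; real products make this transparent to applications:

```python
# Hypothetical sketch: classify files as "hot" or "cold" by last-access age,
# the core decision behind tiering cold data to cheaper (cloud) storage.
import time

COLD_AFTER_DAYS = 90  # assumed policy threshold

def tier_for(last_access_ts, now=None):
    now = now or time.time()
    age_days = (now - last_access_ts) / 86400
    return "cold" if age_days > COLD_AFTER_DAYS else "hot"

now = time.time()
files = {
    "report.docx":   now - 10 * 86400,   # accessed 10 days ago
    "logs-2020.tar": now - 400 * 86400,  # accessed over a year ago
}
placement = {name: tier_for(ts, now) for name, ts in files.items()}
print(placement)  # logs-2020.tar would be tiered to cold storage
```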

How to Create a Network Proxy Using Stream Processor Pipy

Every pipeline gets access to the same set of variables across a Pipy instance. In other words, contexts have the same shape. When you start a Pipy instance, the first thing you do is define the shape of the context by declaring variables and their initial values. Every root pipeline clones the initial context you define at the start. When a sub-pipeline starts, it either shares or clones its parent’s context, depending on which joint filter you use. For instance, a link filter shares its parent’s context while a demux filter clones it. To the scripts embedded in a pipeline, these context variables are their global variables, which means they are always accessible to scripts from anywhere, as long as those scripts live in the same script file. This might seem odd to a seasoned programmer, because global variables are usually globally unique: you have only one set of them, whereas in Pipy there can be many sets (aka contexts), depending on how many root pipelines are open for incoming network connections and how many sub-pipelines clone their parents’ contexts.
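The share-versus-clone behavior described above can be sketched in plain Python. This is a conceptual illustration only, not Pipy's actual PipyJS API; the class and variable names are invented:

```python
# Concept sketch: each root pipeline clones the initial context; a
# sub-pipeline either shares its parent's context (like a `link` filter)
# or clones it (like a `demux` filter).
import copy

INITIAL_CONTEXT = {"counter": 0, "client": None}  # the context "shape"

class Pipeline:
    def __init__(self, context):
        self.context = context

    def sub_shared(self):
        # Same dict object: changes are visible to the parent (link-like).
        return Pipeline(self.context)

    def sub_cloned(self):
        # Independent copy: changes stay local (demux-like).
        return Pipeline(copy.deepcopy(self.context))

def root_pipeline():
    # Every incoming connection gets its own clone of the initial context.
    return Pipeline(copy.deepcopy(INITIAL_CONTEXT))

root = root_pipeline()
shared = root.sub_shared()
cloned = root.sub_cloned()

shared.context["counter"] += 1   # visible in root
cloned.context["counter"] += 10  # isolated from root
print(root.context["counter"], cloned.context["counter"])  # 1 10
```

So "global" variables in Pipy are global per context, and the number of live contexts depends on how many connections and cloning sub-pipelines exist.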


More Security Flaws Found in Apple's OS Technologies

Apple said it has implemented an improved validation mechanism in macOS Monterey 12.2 to address the issue. The company has credited two other researchers — one from Trend Micro and another anonymous individual — for reporting the flaw to the company. Meanwhile, one of the two zero-day flaws (CVE-2022-22587) that Apple fixed this week involved IOMobileFrameBuffer, a kernel extension related to a device's frame buffer. The memory corruption bug allows attackers to run arbitrary code at the kernel level and is likely being actively exploited in the wild already, Apple said. The bug impacts macOS Monterey, iPhone 6 and later, all iPad Pro models, and several other Apple mobile devices. "CVE-2022-22587 targets the macOS kernel, and compromising it can give the attacker root privileges," Levin says. "However, SIP comes into play exactly for this kind of exploit." The flaw is one of several serious vulnerabilities that researchers have uncovered in IOMobileFrameBuffer recently. Other examples include CVE-2021-30883, a zero-day code execution bug that Apple patched last October amid active exploit activity, and CVE-2021-30807, which Apple fixed last July.

Why vulnerability scanners aren’t enough to prevent a ransomware attack on your business

Vulnerability scanners are needed in most security toolkits. However, reactively detecting and alerting organizations to the presence of vulnerabilities means companies cannot keep up. Vulnerability scanners are akin to equipping security teams with an alarm system that’s constantly flashing lights and sounding sirens everywhere – so many alerts at once that it overwhelms security operations. Given the significant transitions many organizations’ digital infrastructures are undergoing, along with the complex and quickly evolving threat landscape, a scan-and-patch approach reliant on vulnerability scanners as a first line of defense is simply insufficient to protect organizations from current and future threats. As such, relying on vulnerability scanners is a dangerous strategy in the modern era, when vulnerabilities are actively and regularly weaponized for successful ransomware attacks. The dynamic shift in the threat landscape requires an equally dynamic shift in how organizations approach their cybersecurity programs.

Shipment-Delivery Scams Become the Favored Way to Spread Malware

Researchers attributed the ramp-up in package-delivery scams to a couple of factors. Spoofing DHL certainly made sense in the fourth quarter of last year during the busy holiday-shopping season, noted Jeremy Fuchs, cybersecurity researcher and analyst at Avanan, in a report on the latest DHL-related scam, published Thursday. “Now, hackers are taking advantage of this, by attaching malware to a DHL spoof,” which will likely attract attention from a recipient in part because of its use of a trusted company, he wrote in the report. Moreover, shipping delays and supply-chain issues have become commonplace during the pandemic, which also has spurred a massive increase in people working remotely from home. Attaching a malicious invoice link to a fake USPS missed-delivery notification, then – as threat actors did in the recently discovered Trickbot campaign – would be an attractive lure for potential victims accustomed to receiving these types of emails, according to Cofense. “With the supply-chain delays, receiving a notification that a delivery attempt was missed can lead to frustration and entice the recipient to open the invoice link to further investigate,” Cofense PDC researchers Andy Mann and Schyler Gallant wrote in the report.

Do machines feel pain?

“Do machines feel pain?” is a very philosophical question, says Anuj Gupta, Head of AI, Vahan. “Some robots react when hit. Does it mean they ‘feel’ pain? No. Their reaction is a combination of sensors and software. It is like a toy that reacts to one’s hand gestures. Currently, machines can’t feel anything. They can be programmed to trick humans by simulating human emotions, including pain.” A few years ago, scientists from Nanyang Technological University, Singapore, developed ‘mini-brains’ to help robots recognise pain and activate self-repair. The approach embeds AI into the sensor nodes, connected to multiple small, less-powerful processing units that act like ‘mini-brains’ on the robotic skin. Combining the system with a self-healing ion gel material then lets damaged robots recover their mechanical functions without human intervention. Explaining the ‘mini-brains’, co-author of the study, Associate Professor Arindam Basu, from the School of Electrical & Electronic Engineering of the university, says, “If robots have to work with humans, there is a concern whether they would interact safely. To ensure a safe environment, scientists worldwide have been finding ways to bring a sense of awareness to robots, including feeling pain, reacting to it, and withstanding harsh operating conditions.”

AI storage: a new requirement for the shift in computing and analytics

DDN Storage has traditionally focused on data storage for unstructured data and big data in the enterprise, government and academic sectors. Now, it is redefining the imperatives that drive it as a company, focusing on AI storage, the solution at the heart of its growth strategy. In action, over the last two years DDN has acted as the core backend storage system for NVIDIA, increasing performance, scale and flexibility to drive innovation. NVIDIA commands “nearly 100%” of the market for training AI algorithms and has multiple AI clusters, according to Karl Freund, analyst at Cambrian AI Research. Following this success, DDN is powering the UK’s most powerful supercomputer, Cambridge-1, which went live in 2021 and is focused on transforming AI-based healthcare research. The AI storage vendor is also working with Recursion, the drug discovery company. “Our at-scale data needs require fast ingest, optimised processing and reduced application run times,” said Kris Howard, Systems Engineer at Recursion. Working with DDN, the drug discovery company cut costs by up to 20x and raised the possibilities for accelerating the drug discovery pipeline with new levels of AI capability.

Chaos Engineering Has Evolved Since Netflix's Chaos Monkey Days

Chaos engineering or failure injection does not necessarily need to kill things or break services. Jason mentioned introducing latency and seeing how your application will behave. It's a normal situation for the database to become slow, either because the network is saturated when someone else all of a sudden starts using the same channel to your database, or because the indexes in the database are not optimized, and things like that. When we develop a certain feature, we often think under the condition that everything will be, you know, nice and green, and that it would be sunny with unicorns and things like that. However, simply looking at how your system behaves when you increase the latency between components can already be a big eye-opener. This is especially true regarding how certain things are configured in your system, or some of the requests you think need to be synchronous. When you're testing these on a unit or integration test, the response time looks very quick.
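A minimal latency-injection experiment along these lines might look like the following Python sketch. It is illustrative only; the function names, delay ranges and timeout values are assumptions, not a real chaos tool:

```python
# Wrap a dependency call with artificial delay, then watch how the
# caller's timeout behaves: no services are killed, yet the experiment
# still surfaces a failure mode.
import random, time

def call_database():
    return "rows"

def with_latency(func, min_ms=200, max_ms=800):
    def wrapper(*args, **kwargs):
        time.sleep(random.uniform(min_ms, max_ms) / 1000.0)  # injected delay
        return func(*args, **kwargs)
    return wrapper

def query_with_timeout(db_call, timeout_ms=500):
    start = time.perf_counter()
    result = db_call()
    elapsed_ms = (time.perf_counter() - start) * 1000
    if elapsed_ms > timeout_ms:
        raise TimeoutError(f"query took {elapsed_ms:.0f} ms")
    return result

# Simulate a database that has become slow (e.g. saturated network).
slow_db = with_latency(call_database, min_ms=600, max_ms=700)
try:
    query_with_timeout(slow_db, timeout_ms=500)
except TimeoutError as exc:
    print("chaos experiment surfaced:", exc)
```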

Quote for the day:

"Leaders must see the dream in their mind before they will accomplish the dream with their team." -- Orrin Woodward

Daily Tech Digest - January 30, 2022

Machine learning is going real-time: Here's why and how

ML systems need to have two components to be able to do that, Huyen notes. They need fast inference, i.e. models that can make predictions in the order of milliseconds. And they also need real-time pipelines, i.e. pipelines that can process data, input it into models, and return a prediction in real-time. To achieve faster inference, Huyen goes on to add, models can be made faster, they can be made smaller, or hardware can be made faster. The focus on inference, TinyML, and AI chips that we've been covering in this column is perfectly aligned to this, and naturally, these approaches are not mutually exclusive either. Huyen also embarked on an analysis of streaming fundamentals and frameworks, something that has also seen wide coverage in this column from early on. Many companies are switching from batch processing to stream processing, from request-driven architecture to event-driven architecture, and this is tied to the popularity of frameworks such as Apache Kafka and Apache Flink. This change is still slow in the US but much faster in China, Huyen notes.
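One of the "make the model smaller" techniques is weight quantization, which can be sketched in a few lines of pure Python. This is a toy illustration of the idea, not a production quantizer, and the weight values are made up:

```python
# 8-bit quantization sketch: scale float weights into the int8 range and
# dequantize on use, trading a small reconstruction error for a model
# roughly a quarter the size of float32 (and often faster inference).
def quantize(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]  # int8-range values
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.52, -1.27, 0.03, 0.98]
q, scale = quantize(weights)
restored = dequantize(q, scale)
max_error = max(abs(a - b) for a, b in zip(weights, restored))
print(q)          # small integers in [-127, 127]
print(max_error)  # reconstruction error bounded by scale / 2
```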

Whistleblowers can protect crypto and DeFi

While the industry frets over this counterrevolution of sorts, crypto insiders who report fraud and illegal activity to the government could see significant upside. Regulators, such as the SEC, the CFTC, the Financial Crimes Enforcement Network, and the Internal Revenue Service, need whistleblowers who can provide an inside look at the operations of a company or industry segment, helping regulators identify fraud and illegal activities well before wrongdoers irreparably injure investors, customers and the public. Information from insiders can also help regulators target their enforcement actions and rulemaking to address the worst actors in the space, which can help prevent regulators from unnecessarily quashing innovative and valuable aspects of the cryptocurrency industry. In exchange for this information, whistleblowers can earn awards under various federal whistleblower rewards programs, provided the whistleblower properly filed a tip that contributed to a qualifying enforcement action. In the case of the SEC and CFTC programs, and now the newly enhanced AML whistleblower program, a whistleblower can receive an award of up to 30% of an enforcement action of more than $1 million.

Remove System Complexity with The “Impedance Mismatch Test”

Everyone has data pipelines composed of lots of different systems. Some may even look very sophisticated on the surface, but the reality is there’s lots of complexity to them––and maybe unnecessarily so. Between the plumbing work to connect different components, the constant performance monitoring required, and the large team with unique expertise needed to run, debug and manage them, all these factors can add time-to-market delays and operational overhead for product teams. And that’s not all. The more systems you use, the more places you are duplicating your data, which increases the chances of data going out-of-sync or stale. Further, since components may be developed independently by different companies, upgrades or bug fixes might break your pipeline and data layer. ... Variables such as the data format, schema and protocol add up to what’s called the “transformation overhead.” Other variables like performance, durability and scalability add up to what’s called the “pipeline overhead.” Put together, these classifications contribute to what’s known as the “impedance mismatch.”

New SEC Proposal Could Be a Disaster for DeFi Exchanges

Under this new definition, decentralized exchanges such as Uniswap would be subject to SEC regulations and would therefore need to register with the SEC as a securities broker. As decentralized exchanges have no way of complying with the current demands placed on securities exchanges by the SEC, the new legislation would effectively kill decentralized exchanges operating within the United States. DeFi enthusiast Gabriel Shapiro highlighted the potential devastating effects of the proposal in a blog post, noting that “because the proposal achieves this expansion by providing new restraints on ‘communication protocols,’ I believe it may also be unconstitutional as a restraint on free speech,” taking a strong stance against the proposed changes. He also suggested that under the new definition, the SEC could class block explorers, such as Etherscan, as securities exchanges because they allow users to interact with smart contracts to communicate trading interests. Shapiro is not the only prominent figure to come out against the SEC’s proposed legislation. 

Accessing And Retaining Knowledge Is Vital For Businesses In The Era Of The Great Reshuffle

In many businesses, when an employee moves to a new job, all that’s left behind is a digital shadow. Their knowledge, expertise and experience disappear, and new hires and old colleagues alike struggle to fill the gaps. A trail of data breadcrumbs that leads nowhere — old messages, outdated docs and dusty email chains — is often all that busy ex-teammates are left to rely on. As a result, business productivity suffers. Of course, this isn’t the fault of the person who has moved roles. Their expertise belongs to them, and too often, organizations undervalue that expertise, further fuelling resignations. It’s in the hands of businesses to do more to retain business-critical knowledge and smooth the transition for new teammates. Nobody should have to rely on guesswork from day one. And if they do, chances are they too won’t stick around for long. To overcome these challenges, we need to think innovatively and start optimizing our tech stacks to reduce knowledge drain and fast-track problem-solving. The solution isn’t more collaboration or communication apps.

FBI Reportedly Considered Buying NSO Spyware

The yearlong investigation by Bergman and Mazzetti also alleges that a group of Israeli computer engineers arrived at a New Jersey building used by the bureau in June 2019 and started testing their equipment. The report alleges that the FBI had bought a version of Pegasus, NSO’s premier spying tool. "For nearly a decade, the Israeli firm had been selling its surveillance software on a subscription basis to law-enforcement and intelligence agencies around the world, promising that it could do what no one else - not a private company, not even a state intelligence service - could do: consistently and reliably crack the encrypted communications of any iPhone or Android smartphone," says the NYT report. As part of their training on the tool, bureau employees bought new smartphones, with SIM cards from other countries. This version of Pegasus that the FBI bought was zero-click, i.e. it did not require users to click on a malicious attachment or link - so the owners of the phones being monitored by users in the U.S. would see no evidence of an ongoing breach.

Zero Trust is hard but worth it

Keeping software updated is key to applying both these rules, and unfortunately that’s often a problem for enterprises. Desktop software, particularly with WFH, is always a challenge to update, but a combination of centralized software management and a scheduled review of software versions on home systems can help. For operations tools, don’t be tempted to skip versions of open source tools just because new releases seem to happen a lot. It’s smart to include a version review of critical operations software as part of your overall program of software management and take a close look at new versions at least every six months. Even with all of this, it’s unrealistic to assume that an enterprise can anticipate all the possible threats posed by all the possible bad actors. Preventing disease is best, but treating it once symptoms arise is essential, too. The most underused security principle is that preventing bad behavior means understanding good behavior. Whatever the source of a security problem, it almost always means that something is doing something it shouldn’t be. How can we know that? By watching for different patterns of behavior.
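The "understand good behavior" principle can be sketched as a simple statistical baseline check. This is illustrative only: real behavioral analytics use far richer models, and the traffic numbers below are invented:

```python
# Learn a baseline of normal behavior (e.g. requests per minute from a
# host), then flag observations that deviate sharply from it.
from statistics import mean, stdev

def build_baseline(samples):
    return {"mean": mean(samples), "stdev": stdev(samples)}

def is_anomalous(value, baseline, threshold=3.0):
    if baseline["stdev"] == 0:
        return value != baseline["mean"]
    z = abs(value - baseline["mean"]) / baseline["stdev"]
    return z > threshold  # more than `threshold` standard deviations away

normal_requests_per_min = [42, 38, 45, 40, 44, 39, 41, 43]
baseline = build_baseline(normal_requests_per_min)

print(is_anomalous(44, baseline))   # typical traffic: not flagged
print(is_anomalous(500, baseline))  # something doing what it shouldn't
```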

Apache Airflow and the New Data Engineering

The ELT steps can seem simple enough on the surface, but with a lot of moving parts, an increasing number of sources and increasing ways to use the data, a lot can go wrong. Data engineers need to contend with complex scheduling requirements, creating dependencies between tasks, figuring out what can run in parallel and what needs to run in series, what makes for a successful task run, how to checkpoint tasks and handle failures and restarts, how to check data quality, how and who to alert on failures -- all the stuff Airflow was designed to handle. The cloud only makes that process more complicated, with cloud buckets used to stage data from sources before loading that data into cloud-based distributed data management systems like Snowflake, Google Cloud Platform or Databricks. And here’s what I think is important: For many organizations, making the leap from exploratory data analysis [EDA] to formalizing what’s found into data pipelines has become increasingly valuable.
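The dependency and retry bookkeeping described above is exactly what Airflow's scheduler automates. As a concept-only illustration (this is not Airflow's API; the task names are invented), a toy runner might look like this:

```python
# Toy illustration of what a workflow engine manages: run tasks in
# dependency order, retrying failed tasks a fixed number of times.
def run_dag(tasks, deps, retries=2):
    done, order = set(), []
    while len(done) < len(tasks):
        ready = [t for t in tasks if t not in done
                 and all(d in done for d in deps.get(t, []))]
        if not ready:
            raise RuntimeError("cycle or unsatisfiable dependency")
        for name in ready:
            for attempt in range(retries + 1):
                try:
                    tasks[name]()   # run the task
                    break
                except Exception:
                    if attempt == retries:
                        raise       # retries exhausted: fail the run
            done.add(name)
            order.append(name)
    return order

tasks = {
    "extract":   lambda: None,
    "load":      lambda: None,
    "transform": lambda: None,
}
deps = {"load": ["extract"], "transform": ["load"]}  # ELT ordering
print(run_dag(tasks, deps))  # ['extract', 'load', 'transform']
```

Airflow adds the parts the toy omits: cron-style scheduling, parallelism, checkpointing, alerting and a UI over the same core idea of a dependency graph of tasks.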

Web3’s early promise for artists tainted by rampant stolen works and likenesses

Ironically, the decentralized markets selling NFTs are starting to centralize around one or two providers. One of the most popular, OpenSea, has a full takedown team dedicated to situations like York’s or Quinni’s. The company has taken off, reaching a stratospheric $13 billion valuation after a $300 million round in early January. The company is far and away the biggest player in the NFT market, with an estimated 1.26 million active users and over 80 million NFTs. According to DappRadar, the platform took in $3.27 billion in transactions in the last 30 days and managed 2.33 million transactions. Its nearest competitor, Rarible, saw $14.92 million in transactions in the same period. ... Interestingly, the company also seems to be cracking down on deep fakes or, as OpenSea calls it, non-consensual intimate imagery (NCII), a problem that hasn’t surfaced widely yet but could become pernicious for influencers and media stars. “We have a zero-tolerance policy for NCII,” they said. “NFTs using NCII or similar images (including images doctored to look like someone that they are not) are prohibited, and we move quickly to ban accounts that post this material.

Understanding Web3's Supporting Blockchain Technology

The benefits of a decentralized network are varied, but because transactions don’t have to go through a “trusted party,” nobody has to know or trust anyone else. Every person in the network has a copy of the distributed ledger which contains the exact same data. If a person’s ledger is altered or corrupted, it will be rejected by the other members in the network. One of the cons of a decentralized network is that the more members there are in a network, the slower the network tends to be. In decentralized blockchain systems, unlike distributed systems, security is prioritized over performance. When a blockchain network scales up or out, the network becomes more secure but performance slows down, because every member node has to validate all of the data that is being added to the ledger. “Most references place blockchain squarely in the realm of currencies or finances, but the applicability is far greater,” said Perella. “When the world wide web came about, most websites were maintained by individuals or groups hosting their own systems and data. This format would eventually become known as Web 1.0.”
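The reject-a-tampered-copy property comes from each block committing to the previous block's hash, so any alteration is detectable by every validating node. A minimal Python sketch of that idea (illustrative only, not any production blockchain):

```python
# Each block stores the previous block's hash; validation recomputes the
# hashes, so an altered or corrupted copy of the ledger fails the check
# and is rejected by other members of the network.
import hashlib, json

def block_hash(block):
    payload = json.dumps({k: block[k] for k in ("data", "prev")}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain, data):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"data": data, "prev": prev}
    block["hash"] = block_hash(block)
    chain.append(block)

def is_valid(chain):
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False  # block contents no longer match its hash
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev"] != expected_prev:
            return False  # chain linkage broken
    return True

ledger = []
append(ledger, "alice pays bob 5")
append(ledger, "bob pays carol 2")
print(is_valid(ledger))   # True

ledger[0]["data"] = "alice pays bob 500"  # a corrupted copy
print(is_valid(ledger))   # False: other nodes would reject it
```

The "every node validates everything" step in `is_valid` is also why scaling the network up makes it more secure but slower, as the excerpt notes.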

Quote for the day:

"Integrity is the soul of leadership! Trust is the engine of leadership!" -- Amine A. Ayad

Daily Tech Digest - January 29, 2022

BotenaGo Botnet Code Leaked to GitHub, Impacting Millions of Devices

Researchers also found additional hacking tools, from several sources, collected in the same repository. Alien Labs called the malware source code “simple yet efficient,” able to carry out malware attacks with a mere 2,891 lines of code (including empty lines and comments). In its November writeup, Alien Labs noted that BotenaGo, written in Google’s open-source Golang programming language, could exploit 33 vulnerabilities for initial access. The malware is light, easy to use and powerful. BotenaGo’s 2,891 lines of code are all that’s needed for a malware attack, including, but not limited to, installing a reverse shell and a telnet loader used to create a backdoor to receive commands from its command-and-control (C2) operator. Caspi explained that BotenaGo has automatic setup of its 33 exploits, presenting an attacker a “ready state” to attack a vulnerable target and infect it with an appropriate payload based on target type or operating system. The source code leaked to GitHub features a “supported” list of vendors and software used by BotenaGo to target its exploits at a slew of routers and IoT devices.

The best IT skill for the 2020s? Become an 'evergreen' learner

For starters, the "soft" skills will matter in the months and years ahead. These include professional skills such as communication, leadership, and teamwork, says Don Jones, vice president of developer skills at Pluralsight. Then there is a need for "tech-adjacent skills, like a familiarity with project management and business analysis." Jones urges an "evergreen" approach to skills mastery, as technology evolves too quickly to commit to a single platform or solution set. "The biggest-impact skill is the ability to learn," he says. "There's no single tech skill you can invest in that won't change or be outdated in a year; your single biggest skill needs to be the ability to update skills and learn new skills." This also means placing a greater emphasis on emotional intelligence, as many emerging systems will be built on artificial intelligence, analytics, or automation that mimic human processes, therefore augmenting human workers. "Anyone can be taught to swap out memory, but the skill of communication and responding to human emotion is not a skill so easily taught," says Chris Lepotakis.

Three things Web3 should fix in 2022

Web3 backers love to talk about how blockchain networks are computers that can be programmed to do anything you imagine, given superpowers by the fact that they are also decentralized. Ethereum was the first of these computers to get real traction, but it was quickly overwhelmed by traffic. Traffic is managed by charging fees to use the computer, and the fees to complete a single transaction on the Ethereum network can run over $100. Imagine spending $75 to create a “free” Facebook account and another $75 every time you wanted to post something, and you have a sense of what it would be like to participate in a social network on the blockchain today. Ethereum is in the midst of a transformation designed to make it more efficient — which is to say, faster, less expensive, and less wasteful of energy. In the meantime, technologists routinely appear announcing that they have built a more efficient blockchain. Solana, for example, is a company that raised $314 million last year to build what it calls “the fastest blockchain in the world.” With that in mind, let’s check in on how the fastest blockchain in the world was doing on Sunday, when the aforementioned crypto crash led many people to use it to buy and sell assets.

Five Data Governance Trends for Organizational Transformation in 2022

There is a growing challenge to better govern data as it increases in variety and volume, and there is an estimate that 7.5 septillion gigabytes of data is generated every single day. Moreover, in organizations, silos are getting created through multiple data lakes or data warehouses without the right guidelines, which will eventually be a challenge in managing this data growth. To achieve nimbleness, we can simplify the data landscape by using a semantic fabric, popularly called data fabric, based on a strong Metadata Management operating model. This can further make data interoperable between divisions and functions while working to a competitive advantage. Data fabric simplifies Data Management, across cloud and on-premise data sources, even though data is managed as domains. In addition, data democratization can be a strong enabler for managing data across domains with ease and making data available as well as interoperable. Allowing business users to source and consume relevant data for their instantaneous reporting or generation of insights can reduce significant turnaround time in acquiring or sourcing data traditionally.

How the metaverse could impact the world and the future of technology

The metaverse could potentially use virtual reality, or augmented reality as we know it now, to immerse users in an alternate world. The technology is still being developed, but companies like Meta say they are building and improving these devices. Meta's Oculus Quest, now in its second model, is one such device. "When you're in the metaverse, when you're in a virtual reality headset, you will feel like you're actually sitting in a room with someone else who can see you, who can see all of your nonverbal gestures, who you can respond to and mimic," Ratan said. Immersive worlds and creating online avatars is nothing new, as games like Grand Theft Auto Online, Minecraft and Roblox have already created virtual universes. Meta's announcement last October aims to go beyond entertainment, and create virtual workspaces, homes and experiences for all ages. "What's happening now is the metaverse for social media without gaming," Ratan said. "The new metaverse is designed to support any type of social interaction, whether that's hanging out with your friends or having a business meeting."

Use the Drift and Stability of Data to Build More Resilient Models

Data drift represents how a target data set is different from a source data set. For time-series data (the most common form of data powering ML models), drift is a measure of the “distance” between data at two different instances in time. The key takeaway is that drift is a singular, or point, measure of the distance between two different data distributions. While drift is a point measure, stability is a longitudinal metric. We believe resilient models should be powered by data attributes that exhibit low drift over time — such models, by definition, would exhibit less drift-induced misbehavior. To capture this property of drift over time, we introduce the notion of data stability. Stable data attributes drift little over time, whereas unstable data is the opposite. We provide additional details below. Consider two different attributes: the daily temperature distribution in NYC in November (TEMPNovNYC) and the distribution of the tare weights of aircraft at public airports (AIRKG). It is easy to see that TEMPNovNYC has lower drift than AIRKG; one would expect less variation in November temperatures in NYC across years than in the weights of aircraft at two different airports.
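One common point measure of drift between two binned distributions is the Population Stability Index (PSI). The sketch below is illustrative; the bin fractions are invented for the example, and the article's authors may use a different distance:

```python
# PSI between a source and target distribution over shared bins:
# values near 0 mean little drift; large values mean the attribute
# has shifted substantially between the two instances in time.
import math

def psi(source_fracs, target_fracs, eps=1e-6):
    return sum((t - s) * math.log((t + eps) / (s + eps))
               for s, t in zip(source_fracs, target_fracs))

# Fraction of observations per bin at two different times.
nov_2020 = [0.10, 0.40, 0.40, 0.10]   # e.g. binned NYC November temperatures
nov_2021 = [0.12, 0.38, 0.40, 0.10]   # similar shape: low drift
shifted  = [0.40, 0.30, 0.20, 0.10]   # different shape: high drift

print(round(psi(nov_2020, nov_2021), 4))  # small value
print(round(psi(nov_2020, shifted), 4))   # much larger value
```

Stability, in the article's terms, would then be a summary of such point measures computed repeatedly over time: a stable attribute keeps producing small values.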

How to become an AI influencer

An influencer has huge responsibilities to fill. As someone with a big following, it is important to understand the kind of impact they can have on their target audience, especially if the audience is young or just starting out in their careers. Venkat Raman, co-founder of Aryma Labs, a data consulting firm, lists a few things influencers should keep in mind while creating their content. First, don’t give false hope. He adds, “I see many posts and tweets where some influencers proclaim that one does not need to know advanced math to break into data science. The poor aspirants believe it, and when they face the tough curriculum, they give up. I think we need to be honest. This will help set the correct expectations.” ... Many influencers in the field teach statistics through their content. Statistics is one of the core foundations of data science. Raman adds, “I have seen even the most popular YouTubers teach statistics wrongly.” The foundation can’t be left shaky. Influencers owe it to their audience to teach the right material. Unfortunately, in the chase for follower counts and the pressure to constantly create content, they end up producing substandard content.

‘Dark Herring’ Billing Malware Swims onto 105M Android Devices

On the technical side, once the Android application is installed and launched, a first-stage URL is loaded into a webview, which is hosted on CloudFront, researchers said. The malware then sends an initial GET request to that URL, which sends back a response containing links to JavaScript files hosted on Amazon Web Services cloud instances. The application then fetches these resources, which it needs to proceed with the infection process — and specifically, to enable geo-targeting. “One of the JavaScript files instructs the application to get a unique identifier for the device by making a POST request to the “live/keylookup” API endpoint and then constructing a final-stage URL,” according to the analysis. “The baseurl variable is used to make a POST request that contains unique identifiers created by the application, to identify the device and the language and country details.” The response from that final-stage URL contains the configuration that the application will use to dictate its behavior, based on the victim’s details. Based on this configuration, a mobile webpage is displayed to the victim, asking them to submit their phone number to activate the app (and the DCB charges).
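The chain of requests described above can be sketched in the abstract; the URLs, field names, and responses below are invented stand-ins written for illustration, not taken from the actual malware.

```typescript
// A stub "network" maps URLs to canned responses so the staged flow
// can be walked through without any real requests.
const network: Record<string, any> = {
  "https://cdn.example/stage1": { scripts: ["https://cloud.example/geo.js"] },
  "https://cloud.example/live/keylookup": { deviceId: "abc123" },
  "https://cloud.example/final?device=abc123": {
    language: "en",
    country: "GB",
    showDcbPrompt: true, // whether to show the DCB sign-up page
  },
};
const get = (url: string) => network[url];
const post = (url: string, _body: unknown) => network[url];

// Stage 1: the initial GET returns links to further JavaScript resources.
const stage1 = get("https://cdn.example/stage1");

// Stage 2: a POST to the keylookup endpoint yields a unique device identifier.
const { deviceId } = post("https://cloud.example/live/keylookup", {
  model: "phone-x",
});

// Stage 3: the final-stage URL, built from that identifier, returns the
// geo-targeted configuration that dictates the app's behavior.
const config = get(`https://cloud.example/final?device=${deviceId}`);
```

The point of the staging is that each response decides whether the next step runs at all, which is how the campaign tailors itself to a victim's country and language.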

4 ways to mature your digital automation strategy

Immature strategies focus on simple tasks. It’s a great place to start, but to get the most out of automation, it needs to grow. To evolve these task-based automations into automated workflows, applications and systems need to communicate with each other. Steadily adding connected systems provides the opportunity to build increasingly complex, end-to-end workflows. As more processes are connected, you will need a platform to manage the increasing complexity. Fortunately, vendors in different segments of enterprise IT are converging with offerings of business process automation (BPA) suites that include integration libraries and automation and workflow capabilities. This trend provides support for organizations building out their strategies and validates the importance of automation paired with connectivity. RPA bots are very popular because they are powerful and easy to use. This is both a blessing and a curse because RPA is often used when it shouldn’t be, leading to poorly designed processes. 

Integrating IoT in Your Business

If you look at the LoRaWAN ecosystem as a whole, we now have a few hundred hardware partners that have created off-the-shelf products. So the first thing we say is, okay, don’t start by building your own hardware; look at what’s already there. And of course, we have experience with a lot of these devices and we’ve highlighted them. And of course, we also know as a company which ones are of higher quality and which are of lesser quality. But this abundance of availability makes sure that you can choose, and also makes sure there’s a market. Second, if you want to move into, let’s say, custom hardware development, because the sensor is not out there, or because you want to build up IP, or, I mean, you can think of many reasons. What you now see is that in the LoRaWAN ecosystem there are a lot of libraries, a lot of tools, a lot of modules, and that also makes it easier to build your own hardware. So we’ve started off with an open code initiative called the generic node, where we offer the ecosystem an example of how we feel the perfect LoRaWAN device should look, and you can use it for inspiration, or we can help you further.

Quote for the day:

"A company is like a ship. Everyone ought to be prepared to take the helm." -- Morris Wilks

Daily Tech Digest - January 28, 2022

12 steps to take when there’s an active adversary on your network

“You need to know when to break the glass. People are afraid to pull that trigger, to reach that mode, because it’s hard to take it back if you do. There’s oversight and costs, and people are afraid to spin it up sometimes,” McMann says. Given that, teams must have good guidelines to know when and how to escalate situations. “That decision point will be unique to each organization, but the escalation path, who to call, when to engage legal, [etc.] should be clearly documented,” says Nick Biasini, head of outreach for Cisco Talos, a threat intelligence organization. That prevents delays that could give hackers more time to do damage, while still preventing costly responses to minor incidents or false alarms. ... CISOs should be looping in the business during the triage process, security leaders say, a point that’s often overlooked during active responses. ... As J. Wolfgang Goerlich, advisory CISO with Cisco Secure, says: “This is a business problem. But in a security breach, a very technical person will be thinking, ‘I have to remediate.’ However, one of the things that CISOs need to remember is that a breach is a business problem not a technical problem. ... "

Innovation will drive the success of NFT gaming, not profit or hype

With so much interest in NFTs, it’s only natural that developers have begun to develop the infrastructure necessary to handle what will undoubtedly become a massive secondary market for these assets. In addition, holders want real tangible benefits to holding NFTs, and in a crowded gaming market, new entrants need to differentiate to survive. 2022 is likely the year NFT games become more mainstream, especially now that many crypto investors own these assets. And real innovation, not just in NFTs but in gameplay and mechanics themselves, will be the driving force. While NFT gaming gives gamers a way to earn while playing their favorite games, the industry lacks a social component. The advantage of owning an NFT asset is that it’s yours, and you should be able to use that asset where you want. Here are three innovations that are driving the success of NFT gaming today. It’s no secret that Virtual Reality (VR) and Augmented Reality (AR) are the future of gaming. We got a taste of this tech with Pokémon Go, but that was merely a herald of things to come.

Three Factors That Help Cost Optimise Cloud Services

The problem with prioritizing coverage is that not all commitments offer the same amount of savings. Many of the “safest” commitments with the most flexibility produce less than a third of the savings rate of a commitment with less flexibility. This can result in circumstances where the coverage is high but the savings rate is low. Companies that are not growing may find themselves in a situation where they have limited options for increasing their savings rate and must simply wait for the contract terms to end. When combined with percentage savings, commitment coverage provides a better picture of the net cost reduction that the commitment strategy is driving. This is especially significant when teams are comparing alternative purchasing strategies to see whether better coverage actually saves the most money. ... Typically, the highest discount is obtained by making all payments in advance, while the lowest is obtained by making no upfront payments. Vendors frequently adopt, and encourage, the practice of using only one level of advance payment across several contracts.
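The interplay of coverage and savings rate reduces to simple arithmetic; the percentages in this sketch are invented for illustration.

```typescript
// Net cost reduction = coverage (share of spend under commitment)
// multiplied by the savings rate (discount earned on that covered spend).
function netReduction(coverage: number, savingsRate: number): number {
  return coverage * savingsRate;
}

// High coverage through a flexible, low-discount commitment...
const flexible = netReduction(0.9, 0.1);
// ...can cut the bill less than moderate coverage at a deeper discount.
const committed = netReduction(0.6, 0.35);
```

Here the "safer" strategy covers 90% of spend but trims only 9% off the total bill, while the less flexible one covers 60% yet saves 21%, which is why coverage alone is a misleading target.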

3 Strategies for Securing the Supply Chain, Security’s Weakest Link

With whom does your organization have contracts? Whom do you pay to help with day-to-day operations? Particularly for large organizations, this can be a wildly complex proposition. There will be primary providers—who are billing you for services —and secondary and tertiary providers. There will also be upstream and downstream providers, making it critical to research and uncover every single organization with which you do business. Once you’ve created a list of providers, the next step is to prioritize them. Which providers have a direct impact on users or customers? What products do they support? What business processes do they support? How important are they to your mission or your bottom line? Consider if you have any “concentration risks”—does any part of your supply chain rely on only one or two providers? This may be a risk factor. Once partners are prioritized, consider how your organization wants to work with each one. Do you want contractual agreements with each partner? Contracts can help set and manage expectations, help your organization understand the risk profile of your partners, and—just as importantly—map out your organization’s security requirements.

Demystifying machine-learning systems

MIT researchers have now developed a method that sheds some light on the inner workings of black box neural networks. Modeled off the human brain, neural networks are arranged into layers of interconnected nodes, or “neurons,” that process data. The new system can automatically produce descriptions of those individual neurons, generated in English or another natural language. For instance, in a neural network trained to recognize animals in images, their method might describe a certain neuron as detecting ears of foxes. Their scalable technique is able to generate more accurate and specific descriptions for individual neurons than other methods. In a new paper, the team shows that this method can be used to audit a neural network to determine what it has learned, or even edit a network by identifying and then switching off unhelpful or incorrect neurons. “We wanted to create a method where a machine-learning practitioner can give this system their model and it will tell them everything it knows about that model, from the perspective of the model’s neurons, in language.

How should DeFi be regulated? A European approach to decentralization

DeFi protocols are dependent on the blockchains on which they are built, and blockchains can experience attacks (known as "51% attacks"), bugs and network congestion problems that slow down transactions, making them more costly or even impossible. The DeFi protocols, themselves, are also the target of cyberattacks, such as the exploitation of a protocol-specific bug. Some attacks are at the intersection of technology and finance. These attacks are carried out through "flash loans." These are loans of tokens without collateral that can then be used to influence the price of the tokens and make a profit, before quickly repaying the loan. ... The cryptocurrency market is very volatile and a rapid price drop can occur. Liquidity can run out if everyone withdraws their cryptocurrencies from liquidity pools at the same time (a "bank run" scenario). Some malicious developers of DeFi protocols have "back doors" that allow them to appropriate the tokens locked in the smart contracts and thus steal from users (this phenomenon is called "rug-pull").
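The defining property of a flash loan is that the uncollateralised borrow and the repayment happen inside one atomic transaction; if repayment fails, the whole transaction reverts. The toy model below illustrates that invariant as a conceptual sketch, not any specific protocol's contract.

```typescript
// A toy lending pool whose flashLoan() mimics on-chain atomicity: the
// borrower callback receives the funds, and unless at least the full
// amount comes back before the call returns, the "transaction" reverts.
class Pool {
  constructor(public balance: number) {}

  flashLoan(amount: number, borrower: (funds: number) => number): void {
    if (amount > this.balance) throw new Error("insufficient liquidity");
    const repaid = borrower(amount); // borrower may swap, arbitrage, etc.
    if (repaid < amount) {
      // On a real chain the entire transaction, including whatever the
      // borrower did with the funds, is rolled back at this point.
      throw new Error("revert: loan not repaid within the transaction");
    }
    this.balance += repaid - amount;
  }
}

// A well-behaved borrower repays in full (here with a 1-token fee).
const pool = new Pool(1_000);
pool.flashLoan(500, (funds) => funds + 1);
```

The attacks described above fit inside the borrower callback: the tokens are used to move a price and capture a profit, and as long as the principal is returned by the end of the call, the loan "succeeds" from the pool's point of view.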

Social commerce has a bright future – but not on social media

Social commerce, or livestream shopping, is forecast to transform social media into one big shopping channel. The whole buying experience, from initial product discovery to check-out, will take place on social media, with the consumer never stepping out of the app. A lot of serious players agree this will happen. TikTok has launched its shopping facility in the UK, stating: “E-commerce is a big opportunity for TikTok, and it’s something we’re investing in significantly. We think it’s a really significant moment.” It also stated that its internal data shows that one in four TikTokers either research a product or make a purchase after watching a video mentioning a product. More consumers are shopping on social media platforms like Facebook, which could end up benefiting smaller brands. Accenture predicts social commerce will be worth $1.2 trillion by 2025, growing three times faster than traditional e-commerce. It also claims that by 2025, Gen Z will be the second largest set of social commerce users (29% of all expenditure), followed by Gen X, which will account for 28% and Baby Boomers only 10%. As a result, social will comprise no less than 17% of all e-commerce sales by then, too.

Why we can’t put all our trust into AI

The root of the problem is that cybersecurity is hard. For a hard problem what better solution then a magic box which produces the answers? Unfortunately (or fortunately) people still need to be involved in this. Relying solely on the black box will produce a false sense of security which can have disastrous effects. The way forward is a combination of humans and AI working together, utilizing their strengths. AI can do a lot of the heavy lifting, repetitive tasks, and spotting flaws in vast amounts of data, but humans are able to narrow down the important issues quickly and act. We tend to downplay the capabilities of people, but the more research investigates this the more we find how complex our brains are, and all the amazing stuff they can do. Self-driving cars are the classic example. Think of what goes on when driving a car – the motor skills required to steer and work the pedals, and the massive amounts of info being consumed and analyzed quickly by your senses: dashboard info, passenger info, other car info, keeping an eye on the weather, looking at the road, watching behind you, and finally using your instincts to determine when something just “doesn’t feel right”.

Productive Downtime: A New Productivity Method to Implement in 2022

We are all so busy nowadays. Always on the go, constantly checking our phones and email, trying to get things done. But what if we told you that one of the most productive things you can do is actually nothing? It may sound counterintuitive, but it is, nevertheless, true. Studies have shown that taking regular breaks and spending time doing nothing can actually help improve your productivity and creativity. In fact, some experts refer to this as the “divine art of doing nothing.” So how do you go about doing nothing? Pretty simply, it turns out. Just take a few minutes every day to relax and de-stress. Unplug from your devices, close your eyes, and focus on your breath. You can also try some simple meditation or visualization exercises. Or if you’d rather, just take a walk in nature or listen to calming music. The key is to find what works best for you and make time for it in your schedule. And if you’re sorting out deadlines for employees, factor in their need for a little productive downtime in the schedule. If you can find an hour or two each week to relax and rejuvenate, you’ll be much more productive during the rest of your day. 

How the CIO Role Will Evolve In 2022

In the coming year, one in 10 tech execs will have their performance tracked on revenue, according to Forrester projections. Right now, few technology leaders hold revenue-focused positions; the performance of top technology executives tends to be measured on the sources of revenue they support rather than on dollar value. With an accelerated convergence between technology leaders and business stakeholders, leadership executives are likely to take on explicit revenue targets in the coming year. Companies that strive to integrate technology as closely as possible with the business believe this strategy will help advance that alignment. Over the years, CIO qualifications were seen in terms of back-end technology service operations, but this year the role will be more that of a close advisor on business strategy and operations. 2022 will show that CIOs can advise and execute more widely across many areas of the business.

Quote for the day:

"Successful leadership requires positive self-regard fused with optimism about a desired outcome." -- Warren Bennis

Daily Tech Digest - January 27, 2022

The metaverse: Where we are and where we’re headed

The underpinnings of the metaverse have already taken the gaming industry by storm, because gaming is where virtual experiences have been the most immersive. In fact, there’s almost a separate conversation happening in gaming, where virtual interaction and things like NFTs and cryptocurrencies are spawning a creator and gamer economy that hasn’t yet impacted the enterprise. Bitter rivalries exist in gaming, as evidenced by the legal battle between Apple, which wants to charge 30% for access to its app store, and game maker Epic, which needs to access the iPhone because it’s such a compelling format for gaming but refuses to pay that tax. Many gamers dream of a connected network of always-on 3D virtual worlds where you can port your gaming profile anywhere. But that’s not going to happen anytime soon, given that virtual spaces are owned by different companies. And besides, a cross-gaming metaverse doesn’t fully encapsulate the metaverse’s full potential – the one that will transform just about every industry. While forecasting the exact form of the coming metaverse is impossible, the seeds are being sown today.

Best Practices: 5 Risks To Assess for Secure CI Pipeline

If you’re an experienced software engineer or security professional, you’ve probably heard of API keys leaking from public code repositories. Maybe you’ve even experienced your own secrets getting leaked after accidentally committing them to an open-source project. Depending on the type of secret that was leaked, it could end up being a costly mistake. The best way to protect your secrets is to practice good secret management. A good start is to use secret management tools like Azure Key Vault or AWS KMS that provide secure storage and identity-based access. Using GitHub’s built-in repository secrets manager also works well depending on your use case, but it isn’t as feature-rich as a true key management service. Another must-have for secret management is a tool that can tell you right away if you accidentally commit a secret to your codebase. There are several options out there, but secret detection is GitGuardian’s specialty. It has hundreds of built-in secret detectors and is free for open-source projects. Knowing right when you accidentally expose your secrets is crucial in protecting yourself and your code.
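Both practices are easy to illustrate. The lookup below reads secrets from an injected environment map (in Node you would pass `process.env`, or swap in a vault SDK), and the scanner's two regexes are simplified stand-ins for the hundreds of detectors a dedicated tool ships.

```typescript
// Fetch a secret from an injected environment map rather than hardcoding
// it in source. In production the lookup would go through a vault client.
function getSecret(
  env: Record<string, string | undefined>,
  name: string,
): string {
  const value = env[name];
  if (!value) throw new Error(`missing secret: ${name}`);
  return value;
}

// Naive scanner: flag source lines that match common credential shapes.
const SECRET_PATTERNS: RegExp[] = [
  /AKIA[0-9A-Z]{16}/, // AWS access key ID shape
  /-----BEGIN (RSA |EC )?PRIVATE KEY-----/, // PEM private key header
];

// Returns the 1-based line numbers that look like leaked secrets.
function findLeaks(source: string): number[] {
  return source
    .split("\n")
    .flatMap((line, i) =>
      SECRET_PATTERNS.some((p) => p.test(line)) ? [i + 1] : [],
    );
}
```

Running a scanner like this in a pre-commit hook or CI step is what turns "knowing right away" into practice: the commit fails before the secret ever reaches the remote.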

How hybrid working is impacting operational efficiency

The advent of the Omicron variant has been a watershed moment in the story of the pandemic, cementing the idea that restrictions, changed habits, and new workplace practices such as hybrid working aren’t going away any time soon. This realisation has clearly been felt by Google, which is – according to recent reports – in the process of investing £730 million into a reinvigorated work environment that places a heavy emphasis on hybrid working culture. Google’s multi-million-pound purchase and refurbishment of Central Saint Giles will include inclusive meeting rooms for the purposes of hybrid working, in addition to more spacious and partially outdoor areas that have clearly been inspired by pandemic life. By proactively investing in this kind of undertaking, Google is leading the way in looking beyond discussions of whether hybrid working is [or is not] a practice worth pursuing, choosing instead to focus on the workplace practicalities of hybridity. These practicalities demand urgent and comprehensive thought in order for organisations to improve, discover, or regain satisfactory levels of operational efficiency which, as we have found in our clients, may have been eroded due to hybrid working.

Decentralized Web3 Computing Is Key To Scaling The Metaverse

Because metaverses are digital worlds where users interact with each other and software programs in a three-dimensional space, they are also complex systems that require copious computing resources to run their 3D worlds and advanced AI algorithms. Their collection of interconnected applications and services will allow users to freely move between cross-chain visual worlds, requiring highly-distributed and powerful compute power for reliability. The metaverse will also be a critical piece of Web3, providing users with access to blockchain-based applications and services as well as new decentralized applications (dApps) to be built that were never possible before. Fortunately, this infrastructure for a powerful, secure, and scalable computing cloud based on blockchain is already in place and embedded into modern microprocessors. The foundation of this infrastructure is called a trusted execution environment or TEE-based privacy technology. TEE is a secure area of a microprocessor that can provide confidential and isolated application execution while creating a blockchain compute 

Data Quality, Data Stewardship, Data Governance: Three Keys

Quality cannot be measured or improved if definitions and rules aren’t clear, if valid values aren’t clear, if the context is missing, or if there’s no shared understanding of what quality data is. Hopper showed a record with multiple unlabeled fields, illustrating the value of context in understanding data, as well as the importance of consistent terminology shared by business users. “Field 1” just isn’t enough, she said. ... A steward works to protect a valuable resource and ensures the health and sustainability of that resource, she said. ... A steward provides ongoing monitoring and maintenance of data assets, and in most organizations, they focus on quality, because in the end that quality translates into usability, she said. ... Many organizations are unclear about whether data stewards live in the business or in IT, but it’s important to understand that there are different types of data stewards. Some are more business-focused, working with business terms, definitions, and rules, and so they become the go-to person to help business users with a quality issue.

Digital IDs under attack: How to tackle the threat?

A key objective of the eIDAS regulation is to secure electronic identification and authentication in cross-border online services offered within Member States. Today’s publications support the achievement of this objective. In addition, the regulation also addresses identity proofing in the different contexts where trust in digital identities is necessary and elaborates on qualified certificates to allow for other identification methods. The area of identification has seen a new trend emerge over the past few years in self-sovereign identity technologies, also referred to as SSI. The report explains what these technologies are and explores their potential to achieve greater control of users over their identities and data, cross-border interoperability, mutual recognition and technology neutrality, as required by the eIDAS regulation. The report on remote identity proofing builds on ENISA’s previous report, Remote ID Proofing, which analyses the different methods used to carry out identity proofing remotely. The new report analyses the different types of face recognition attacks and suggests countermeasures.

Report: Access Broker Exploiting VMware Log4j Vulnerability

The BlackBerry researchers say attackers most commonly use encoded PowerShell commands to download a second-stage payload to victimized systems, after using the Log4j flaw to first gain access. They warn that in some cases, the threat actors also attempted to use the curl.exe binary file to download additional files to the system - and attempted to execute the downloaded content using the Windows Subsystem for Linux bash utility. They say multiple cryptominers were identified after successful exploitation - and in one case, PowerShell was used to download and execute the "xms.ps1" file containing a cryptominer. The researchers say the script then created a Scheduled Task to establish persistence and to store command-and-control and wallet configurations. The cybersecurity firm also "discovered instances where a webshell file was injected into absg-worker.js, and the VMBlastSG service restarted to allow for connections to the webshell." BlackBerry also calls the threat actors in these cases "tidy" - citing cleanup actions taken following miner installation.
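When triaging this kind of activity, a common first step is decoding the PowerShell payload: `-EncodedCommand` arguments are Base64 over UTF-16LE text. A small Node-flavoured helper (the sample command here is invented, not from the campaign):

```typescript
// PowerShell's -EncodedCommand argument is Base64-encoded UTF-16LE,
// so decoding it reveals the underlying command for analysis.
function decodeEncodedCommand(encoded: string): string {
  return Buffer.from(encoded, "base64").toString("utf16le");
}

// Round-trip a harmless sample the same way an analyst would decode
// a suspicious string pulled from process-creation logs.
const sample = Buffer.from("Write-Output 'hello'", "utf16le").toString("base64");
```

Decoding the command often exposes the second-stage download URL directly, which is what links the initial exploitation to the later cryptominer and webshell activity.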

IT leadership: 3 practices to let go of in 2022

Historically, IT problems have been addressed in a reactive manner. A help desk ticket arrives, and an MSP then initiates an investigation into the issue. That methodology is akin to finding a needle in a haystack, especially if it is a global or regional issue. It requires going into your in-house server to backtrack the issue, resulting in lost productivity and excess effort on the part of MSPs to find and resolve the problem. A cloud-based solution eliminates manual exploration and remediation of help desk issues. Many offer alert prioritization features, enabling IT to clearly see the most urgent issues and address them in a more proactive and efficient way. If there are multiple outages in multiple locations, these solutions allow MSPs to triage issues. What’s more, cloud-based services can be designated for hybrid, public, or private hosting. This eliminates the need for antiquated in-house servers, which are vulnerable to system crashes and lost data as well as costly repairs and maintenance.

Getting proactive about reactance

As the return-to-work conundrum suggests, reactance isn’t triggered by change per se. It is triggered when change bumps up against established norms, beliefs, or expectations, as is often the case with corporate change initiatives—which may help explain why failure rates for such programs are usually pegged at around 70%. “If people have a structured belief and you try to change that belief, that is a moment when people are very inclined to feel reactance. The stronger that belief, the stronger the pushback,” Nordgren said. The natural inclination in such cases is to respond to reactance by making a more strident case for change and bolstering it with plenty of evidence. The problem with this approach, as seen time and again in the last couple of years, is that it raises the pressure to change, which in turn, creates a reactance flywheel. “To me, this is one of the most important ideas around reactance,” Nordgren explained. “If you believe in climate change and you’re dealing with someone who does not, or if you believe in vaccines and you’re dealing with someone who does not, the more evidence you throw at them, the more they fight against it. ...”

How the Financial Times Approaches Engineering Enablement

Teams at the FT have a lot of autonomy, within certain boundaries. The boundaries generally are where you want to make a change that has an impact outside your team, for example, when you want to introduce a new tool but something is already available, or where we get a lot of benefit as a department from having a single approach. ... Similarly, if you want to start shipping logs somewhere different, that has an impact on people’s ability to look at all the logs for one event in a single location, which can be important during an incident. Sometimes, teams need something for which there isn’t a current solution, and then they can generally try something out. For a completely new vendor, teams need to go through a multi-step procurement process - but teams can go through a shorter process while they are doing evaluation, provided they aren’t planning to do something risky like send PII data to the vendor. Teams do use their autonomy. They make decisions about their own architecture, their own libraries and frameworks.

Quote for the day:

"A true dreamer is one who knows how to navigate in the dark." -- John Paul Warren

Daily Tech Digest - January 26, 2022

Science Made Simple: What Is Exascale Computing?

Exascale computing is unimaginably faster than that. “Exa” means 18 zeros. That means an exascale computer can perform more than 1,000,000,000,000,000,000 FLOPS, or 1 exaFLOP. That is more than one million times faster than ASCI Red’s peak performance in 1996. Building a computer this powerful isn’t easy. When scientists started thinking seriously about exascale computers, they predicted these computers might need as much energy as up to 50 homes would use. That figure has been slashed, thanks to ongoing research with computer vendors. Scientists also need ways to ensure exascale computers are reliable, despite the huge number of components they contain. In addition, they must find ways to move data between processors and storage fast enough to prevent slowdowns. Why do we need exascale computers? The challenges facing our world and the most complex scientific research questions need more and more computer power to solve. Exascale supercomputers will allow scientists to create more realistic Earth system and climate models. 

What CISA Incident Response Playbooks Mean for Your Organization

Most of the time, organizations struggle to exercise their incident response and vulnerability management plans. An organization can have the best playbook out there, but if it doesn’t exercise it on a regular basis, well, ‘If you don’t use it, you lose it’. It needs to make sure that its playbooks have the proper scope so that everyone from executives to everyone else within the organization knows what they need to know… When I say ‘exercise’, it’s important that organizations test their plans under realistic conditions. I’m not saying they need to unplug a device or bring in simulated bad code. They just need to make sure everyone tasked in the playbook knows what’s going on, understands what their roles are and periodically tests the plans. They can take the lessons they’ve learned and refine them. Incident response exercises don’t end with victory. They end with lessons for the future. Ultimately, documents that sit on a shelf rarely get read. To be high-performing, industry, government and critical infrastructure organizations need to continue to test their technology, processes and people.

Is Remix JS the Next Framework for You?

While the concept of a route is not new in any web framework, really, a route's definition in Remix begins by creating the file that will contain its handler function. As long as you define the file inside the right folder, the framework will automatically create the route for you. And to define the right handler function, all you have to remember is to export it as a default export. ... For static content, the above code snippet is fantastic, but if you’re looking to create a web application, you’ll need some dynamic behavior. And that is where Loaders and Actions come into play. Both are functions that, if you export them, will be executed before the route handler's actual code. These functions receive multiple parameters, including the HTTP request and the URL's params and payloads. The loader function is specifically called for GET verbs on routes and is used to get data from a particular source (i.e., reading from disk, querying a database, etc.). The function gets executed by Remix, but you can access the results by calling the useLoaderData function.
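Putting those conventions together, a route module with a loader might look like the sketch below. The two stubs at the top stand in for Remix's real `@remix-run/node` helpers and its rendering layer, so this is a self-contained illustration rather than a drop-in route file.

```typescript
// --- Stubs standing in for Remix's own types/helpers (illustration only) ---
type LoaderArgs = { request: { url: string }; params: Record<string, string> };
function json<T>(data: T): T {
  return data; // Remix's real json() wraps the data in a Response
}

// --- What a route file like app/routes/users.$id.tsx would export ---

// In-memory stand-in for a real data source queried by the loader.
const users: Record<string, { name: string }> = { "1": { name: "Ada" } };

// Runs server-side on GET requests to the route, before rendering.
export function loader({ params }: LoaderArgs) {
  return json(users[params.id] ?? { name: "unknown" });
}

// The default export is the route's UI. A real route returns JSX and reads
// the loader's result with useLoaderData(); a plain string stands in here.
export default function UserRoute(data: { name: string }): string {
  return `Hello, ${data.name}`;
}
```

The file's location decides the URL, the named `loader` export supplies the data, and the default export renders it; that is the whole contract Remix asks a route to satisfy.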

3 Fintech Trends of 2022 as seen by the legal industry

User consent is the foundation of open banking, whilst transparency as to where their data goes and who it is shared with is a necessary pre-condition of customer trust. The fintech sector should avoid following in the footsteps of the ad-tech industry, where entire ecosystems were built with a disregard for individuals’ rights and badly worded consent requests. Here, data collected by tracking technologies sunk into the ad-tech ecosystems without a trace, leaving privacy notices so confusing and complex that even seasoned data protection lawyers struggled to understand them. The full potential of open banking can only happen if financial ecosystems are built on transparency which gives users control over who can access their financial data and how it can be used. ... Innovative fintech solutions will need to strike the right balance between the need for regulatory compliance regarding consent, authentications, security and transparency on the one hand, and seamless user experience on the other, in particular when more complex ecosystems and relationships between various products start emerging.

Short-Sightedness Is Failing Data Governance; a Paradigm Shift Can Rectify It

“While organisations understand that data governance is important, many in the region feel that they have invested enough. And that's why data governance implementations are failing because it's still seen largely as an expense,” says Budge in an exclusive interview with Data & Storage Asean. “There's no doubt that it is a significant expense but rightly so, given that so much of digital transformation success is hinged on the proper deployment and consistent execution of a data governance program. Essentially, data governance is not a one-off investment—something you build and walk away—but requires actual ongoing practice and oversight.” Budge adds: “Executives often see only the upfront costs. For the short-sighted, the costs alone are reason enough to curtail further investment. ...” This short-sightedness, though, is not the only reason data governance is largely failing. Another pain point is what Budge describes as “the lack of understanding of the importance of a sound data governance strategy and the value that it can drive.”

Meta is developing a record-breaking supercomputer to power the metaverse

According to Meta, realizing the benefits of self-supervised learning and transformer-based models requires working across various domains — whether vision, speech, or language — as well as in critical applications like identifying harmful content. AI at Meta’s scale will require massively powerful computing solutions capable of instantly analyzing ever-increasing amounts of data. Meta’s RSC is a breakthrough in supercomputing that will lead to new technologies and customer experiences enabled by AI, said Lee. “Scale is important here in multiple ways,” said Lee. ... “Secondly, AI projects depend on large volumes of data — with more varied and complete data sets providing better results. Thirdly, all of this infrastructure has to be managed at the end of the day, and so space and power efficiency and simplicity of management at scale is critical as well. Each of these elements is equally important, whether in a more traditional enterprise project or operating at Meta’s scale,” Lee said.

How AI Will Impact Your Daily Life In The 2020s

Every single sector of the economy will be transformed by AI and 5G in the next few years. Autonomous vehicles may reduce demand for cars, and parking spaces within towns and cities will be freed up for other uses. It may be that people will not own a car at all, instead paying a fee for a car-pooling or ride-share option: an autonomous vehicle picks them up, takes them to work or shopping, and then, rather than sitting stationary in a car park, moves on to its next customer journey. The interior of the car will use AR with holographic technologies to provide an immersive and personalised experience, using AI to deliver targeted, location-based marketing that supports local stores and restaurants. Machine-to-machine communication will be a reality, with on-board vehicle computers exchanging braking, speed, location and other relevant road data with each other, and techniques such as multi-agent deep reinforcement learning may be used to optimise the autonomous vehicles’ decision-making.

My New Big Brain Way To Handle Big Data Creatively In Julia

In 2022, 8 gigabytes of memory is quite a low amount, but usually it is not much of a hindrance in Julia until well past the point where it would be in another language. Really, Julia has spoiled us. I know that I can pass fifty million observations through something with no questions, comments, or concerns from my processor in Julia, no problem. It is the memory, however, whose limits I am often running into. That being said, I wanted to explore some ideas on decomposing an entire feature’s observations into a “canonical form” of sorts, and I started researching precisely those topics. My findings regarding ways to preserve memory have been pretty cool, so I thought it might make an interesting read to look at what I have learned, along with a pretty nice idea I came up with myself. All of this code is part of my project, OddFrames.jl, which is a DataFrames.jl alternative with more features, and I am almost ready to publish this package.

Four Trends For Cellular IoT In 2022

No-code applied to cellular IoT management is an alternative to APIs — an accessible route to automation for non-developer teams. According to Gartner, 41% of employees outside the IT function are customizing or building data or technology solutions. The interest and willingness are there; increasingly, so are the tools. Automation tools enable teams with minimal to no hand-coding experience to automate workflows that would previously wait in a backlog for the attention of a specialist developer. IoT needs scale; there can be no hold-ups or bottlenecks in bringing projects to completion. Applying the benefits of no-code to cellular IoT addresses this. There will always be high demand for skilled software developers to tackle complex development projects. The transition to the cloud did not make system administrators obsolete, and no-code solutions will not replace specialist software developers; development ability is still needed. The no-code opportunity lies in repetitive tasks such as activating an IoT SIM card. Using no-code, this workflow can easily be automated, freeing up developer resources for more complex integrations.

Web3 – but Who’s Counting?

Regardless of the technology that eventually supports Web3, the key will be distribution; data can’t be trapped in a single place. Let me give you an example: data.world may seem like a Web 2.0 application. It’s collaborative, and users generate content in the form of data and analysis, which can be loaded into our servers. That can feel like handing over control. However, unlike the case with today’s data brokers — Facebook, Amazon, etc. — you didn’t give up rights to your data; it is still yours to modify, restrict, or even delete at your discretion. More technically, data.world is built on Semantic Web standards. This means that if you don’t want your data hosted by data.world, that’s just fine. Host it under some other SPARQL endpoint, give data.world a pointer to your data, and it will behave just the same as if it were hosted with us. Deny access to that endpoint — or simply remove it — and it’s gone. This is not to say that data.world is the solution to Web3, here today; far from it. We still don’t really know what Web3 will turn out to be. But one thing is certain: any Web3 platform will have to play in a world of distributed data.
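The “point at any SPARQL endpoint” idea above is easy to picture in code. This is a minimal sketch, assuming a generic SPARQL 1.1 protocol endpoint over HTTP GET; the endpoint URL and query are illustrative placeholders, not data.world’s actual API.

```typescript
// Build a standard SPARQL-protocol GET request: the query travels URL-encoded
// in the `query` parameter, and the Accept header selects the result format.
// Swapping one endpoint URL for another is the only change a client needs.
function buildSparqlRequest(
  endpoint: string,
  query: string
): { url: string; headers: Record<string, string> } {
  const url = `${endpoint}?query=${encodeURIComponent(query)}`;
  return { url, headers: { Accept: "application/sparql-results+json" } };
}

const req = buildSparqlRequest(
  "https://example.org/sparql", // any endpoint you control (placeholder URL)
  "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10"
);

// Executing it is a single fetch; nothing else changes when the host does:
// const res = await fetch(req.url, { headers: req.headers });
// const rows = (await res.json()).results.bindings;
```

Because the query protocol is standardized, “host it elsewhere and hand over a pointer” really is just a different value for `endpoint`.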

Quote for the day:

"Small disciplines repeated with consistency every day lead to great achievements gained slowly over time." -- John C. Maxwell