Daily Tech Digest - June 30, 2021

DigitalOcean aligns with MongoDB for managed database service

There is, of course, no shortage of DBaaS options these days. DigitalOcean is betting that its Managed MongoDB service will extend the appeal of its cloud service not only to developers, but also to SMBs that are looking for less costly alternatives to the three major cloud service providers, Cooks said. MongoDB already has a strong focus on developers who prefer to download an open source database to build their applications. In addition to not having to pay upfront licensing fees, in many cases developers don’t need permission from a centralized IT function to download a database. However, once that application is deployed in a production environment, some person or entity will have to manage the database. That creates the need for the DBaaS platform from MongoDB that DigitalOcean is now reselling as an OEM partner, said Alan Chhabra, senior vice president for worldwide partners at MongoDB. The DigitalOcean Managed MongoDB service is an extension of an existing relationship between the two companies that takes managed database services to the next logical level, Chhabra asserted. “We have a long-standing relationship,” he said.


Digital transformation at SKF through data driven manufacturing approach using Azure Arc enabled SQL

As SKF looked for a solution to support their data-driven manufacturing vision for the Factories of the Future, they wanted one able to support distributed innovation and development, high availability, scalability, and ease of deployment. They wanted each of their factories to be able to collect, process, and analyze data to make real-time decisions autonomously while being managed centrally. At the same time, they had constraints of data latency, data resiliency, and data sovereignty for critical production systems that could not be compromised. The drivers behind adopting a hybrid cloud model came from factories having to meet customer performance requirements, many of which depend on the ability to analyze and synthesize the data. Recently, data analytics paradigms have shifted from big data analysis in the cloud to data-driven manufacturing at the machine, production line, and factory edge. Adopting cloud-native operating models, but in a way that lets them execute workloads physically on-premises at their factories, turned out to be the right choice for SKF.


A new dawn for enterprise automation – from long-term strategy to an operational imperative

To drive sustainable change, organisations need to take a large-scale, end-to-end strategic approach to implementing enterprise automation solutions. On one level, this is a vital step to avoid any future architecture problems. Businesses need to spend time assessing their technology needs and scoping out how technology can deliver value to their organisation. Take, for example, low-code options like drag-and-drop tools. This in-vogue technology is viewed by companies as an attractive, low-cost option to create intuitive interfaces for internal apps that gather employee data – as part of a broad automation architecture. The issue is that many firms rush the process, failing to account for functionality problems that regularly occur when integrating with existing, often disparate systems. It is here where strategic planning comes into its own, ensuring firms take the time to get the UX to the high standard required, as well as identify how to deploy analytics or automation orchestration solutions to bridge these gaps and successfully deliver automation. With this strategic mindset, there is a huge opportunity for businesses to use this thriving market for automation to empower more innovation from within the enterprise.


The Rise Of NFT Into An Emerging Digital Asset Class

The nature of NFTs being unique, irreplaceable, immutable, and non-fungible makes them an attractive asset for investors and creators alike. NFTs have empowered creators to monetize and value their digital content, be it music, videos, memes, or art on decentralized marketplaces, without having to go through the hassles that a modern-day creator typically goes through. NFTs, at their core, are digital assets representing real-world objects. ... NFTs solve the age-old problems that creators like you and I have always faced when protecting our intellectual property from being reproduced or distributed across the internet. The most popular standards for NFTs today are ERC-721 and ERC-1155. ERC-721 was used in the majority of early NFTs until ERC-1155 was introduced. With that said, these token standards have laid the foundation for assets that are programmable and modifiable, thereby setting the cornerstone for digital ownership and leading to all sorts of revolutionary possibilities. The NFT ecosystem has found its way into various industries as more people join hands and dive deeper into its novel possibilities.
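
To make the token-standard talk above concrete, here is a minimal sketch, in Python rather than Solidity, of the data structure an ERC-721-style registry boils down to: a unique token ID mapped to exactly one owner, transferable only by that owner. The class and method names are hypothetical, and this is not the actual ERC-721 interface.

```python
# Minimal sketch of the idea behind a non-fungible token registry.
# This is NOT the ERC-721 interface; the names (NFTRegistry, mint, transfer)
# are hypothetical and exist only to illustrate the data structure.

class NFTRegistry:
    def __init__(self):
        self.owner_of = {}  # token_id -> owner address; each ID is unique

    def mint(self, token_id: str, creator: str) -> None:
        if token_id in self.owner_of:
            raise ValueError("token already exists; NFTs are unique and non-fungible")
        self.owner_of[token_id] = creator

    def transfer(self, token_id: str, sender: str, recipient: str) -> None:
        if self.owner_of.get(token_id) != sender:
            raise PermissionError("only the current owner can transfer the token")
        self.owner_of[token_id] = recipient


registry = NFTRegistry()
registry.mint("artwork-001", "0xCreator")
registry.transfer("artwork-001", "0xCreator", "0xCollector")
print(registry.owner_of["artwork-001"])  # 0xCollector
```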


Three Principles for Selecting Machine Learning Platforms

Of the challenges this company faced from its previous data management system, the most complex and risky was in data security and governance. The teams managing data access were Database Admins, familiar with table-based access. But the data scientists needed to export datasets from these governed tables to get data into modern ML tools. The security concerns and ambiguity from this disconnect resulted in months of delays whenever data scientists needed access to new data sources. These pain points led them towards selecting a more unified platform that allowed DS & ML tools to access data under the same governance model used by data engineers and database admins. Data scientists were able to load large datasets into Pandas and PySpark dataframes easily, and database admins could restrict data access based on user identity and prevent data exfiltration. ... A data platform must simplify collaboration between data engineering and DS & ML teams, beyond the mechanics of data access discussed in the previous section. Common barriers are caused by these two groups using disconnected platforms for compute and deployment, data processing and governance.
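
The unified-access pattern described above is easier to picture with a small sketch. Assuming a Spark environment where a governed table has already been registered (the table and column names here are made up), a data scientist can query it with PySpark and pull a filtered slice into Pandas without exporting files outside the governance model:

```python
# Hypothetical sketch: reading a governed table with PySpark and handing a
# slice of it to Pandas. The table name, the column, and the assumption that
# table-level grants are the only access control involved are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("governed-access-demo").getOrCreate()

# Data scientists query the same governed table that database admins manage;
# permissions are enforced by the platform rather than by exported files.
df = spark.read.table("sales.transactions")          # hypothetical table
recent = df.filter(df.order_date >= "2021-01-01")

# Small, filtered results can be pulled into Pandas for model prototyping.
pdf = recent.toPandas()
print(pdf.describe())
```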


Introduction To AutoInt: Automatic Integration For Fast Neural Volume Rendering

AutoInt, also known as Automatic Integration, is a modern image rendering framework used for fast, high-volume rendering with deep neural networks. It is used to learn closed-form solutions to an image volume rendering equation, an integral equation that accumulates transmittance and emittance along rays to render an image. While conventional neural renderers require hundreds of samples along each ray, and therefore hundreds of costly forward passes through a network, to evaluate such integrals, AutoInt evaluates them with far fewer forward passes. For training, it first instantiates the computational graph corresponding to the derivative of the coordinate-based network. The graph is then fitted to the signal to be integrated. After optimization, it reassembles the graph to obtain a network that represents the antiderivative. Using the fundamental theorem of calculus enables the calculation of any definite integral in two evaluations of the network. Applying this approach to neural image rendering improves the tradeoff between rendering speed and image quality, cutting render times by more than 10× in exchange for slightly reduced image quality.
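
The "two evaluations" step is the heart of the idea and is easy to illustrate. In the toy sketch below, a plain function stands in for the trained antiderivative network; the point is only that, once such a network exists, a definite integral along a ray reduces to subtracting two forward passes. This illustrates the principle and is not the paper's code.

```python
# Toy illustration of the AutoInt idea: if a network G has been trained so
# that its derivative matches the quantity we want to integrate along a ray,
# the fundamental theorem of calculus gives the definite integral from two
# forward passes: G(t_far) - G(t_near).
import math

def antiderivative_net(t: float) -> float:
    # Stand-in for a trained coordinate-based network G(t).
    # Here G(t) = -exp(-t), so its derivative is exp(-t), the "signal".
    return -math.exp(-t)

def integrate_along_ray(t_near: float, t_far: float) -> float:
    # Two evaluations instead of hundreds of samples per ray.
    return antiderivative_net(t_far) - antiderivative_net(t_near)

# Integral of exp(-t) from 0 to 2 is 1 - exp(-2) ≈ 0.8647
print(integrate_along_ray(0.0, 2.0))
```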


How Google is Using Artificial Intelligence?

In the past, we depended on paper maps or on the advice of people who knew the way to our destinations, and the problem was that we often failed to reach them on time. Now you no longer need such advice or a paper map, because Google Maps has solved these difficulties. Covering more than 220 countries and territories, and places from Delhi to the United States, Pakistan, and Australia, it lets you reach your chosen destinations affordably. You may ask what technology makes this possible, and the answer is Artificial Intelligence. The core concept is global localization, which relies on AI. It helps Google Maps understand your current position and orientation, lets the application precisely pinpoint your latitude and longitude, and, as you or your vehicle move, matches your surroundings against Google's vast library of Street View imagery. As you keep travelling, the application offers a series of suggestions, helping you reach a shopping mall, airport, or other transit station. Apart from this, you can prepare a list of places you will visit, set routing options to your preferences, explore the Street View option in Live mode, and so on.


What is edge computing and why does it matter?

There are as many different edge use cases as there are users – everyone’s arrangement will be different – but several industries have been particularly at the forefront of edge computing. Manufacturers and heavy industry use edge hardware as an enabler for delay-intolerant applications, keeping the processing power for things like automated coordination of heavy machinery on a factory floor close to where it’s needed. The edge also provides a way for those companies to integrate IoT applications like predictive maintenance close to the machines. Similarly, agricultural users can use edge computing as a collection layer for data from a wide range of connected devices, including soil and temperature sensors, combines and tractors, and more. The hardware required for different types of deployment will differ substantially. ... Connected agriculture users, by contrast, will still require a rugged edge device to cope with outdoor deployment, but the connectivity piece could look quite different – low-latency might still be a requirement for coordinating the movement of heavy equipment, but environmental sensors are likely to have both higher range and lower data requirements – an LP-WAN connection, Sigfox or the like could be the best choice there.


Artificial Intelligence (AI): 4 novel ways to build talent in-house

To discover the gems hidden across your organization, you must start maintaining a self-identified list of skills for every employee. The list must be updated every six months and be openly searchable by associates to make it useful and usable. Palmer recommends self-classifying each individual’s skills into four categories: expert, functioning, novice, and desired stretch assignment. This allows teams with hiring needs to scout for individuals with ready skills and those with growth aspirations in the five competencies needed for AI. Finding the right content to upskill your in-house teams is a challenge. Despite the rapid mushrooming of training portals and MOOCs (massive open online courses), the curriculums may not meet your organization’s specific needs. However, with access to such great content online, often for free, it may not make sense to recreate your content. “You must design your own curriculum by curating content from multiple online sources,” says Wendy Zhang, director of data governance and data strategy at Sallie Mae. Base the training plan on your team’s background, roles, and what they need to succeed. 


Solving Mysteries Faster With Observability

Let's start by looking at the sources that we turn to when we look for clues. We often begin with observability tooling. Logs, metrics, and traces are the three pillars of observability. Logs give a richly detailed view of an individual service and provide the service a chance to speak its own piece about what went right or what went wrong as it tried to execute its given task. Next, we have metrics. Metrics indicate how the system or subsets of the system, like services, are performing at a macro scale. Do you see a high error rate somewhere, perhaps in a particular service or region? Metrics give you a bird's eye view. Then we have traces, which follow individual requests through a system, illustrating the holistic ecosystem that our request passes through. In addition to observability tooling, we also turn to metadata. By metadata, I mean supplemental data that helps us build context. For us at Netflix, this might be, what movie or what show was a user trying to watch? What type of device were they using? Or details about the build number, their account preferences, or even what country they're watching from. Metadata helps add more color to the picture that we're trying to draw.
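
As a rough illustration of how the three pillars plus metadata might look for a single failed request, here is a small, hypothetical Python sketch; the field names (trace_id, device_type, and so on) are invented for the example and do not reflect Netflix's actual tooling:

```python
# Hypothetical illustration of logs, metrics, traces and metadata for one
# failed request. All field and service names are made up.
import json, logging, time, uuid
from collections import Counter

logging.basicConfig(level=logging.INFO)
request_metadata = {"title": "Example Show", "device_type": "smart_tv", "country": "US"}
trace_id = str(uuid.uuid4())
error_counter = Counter()          # metric: errors per region
span_start = time.time()

try:
    raise TimeoutError("upstream playback service timed out")   # simulated failure
except TimeoutError as exc:
    error_counter["us-east-1"] += 1                              # metric
    logging.error(json.dumps({                                   # structured log
        "trace_id": trace_id, "msg": str(exc), **request_metadata
    }))

span = {"trace_id": trace_id, "service": "playback-api",          # trace span
        "duration_ms": round((time.time() - span_start) * 1000, 2)}
print(span, dict(error_counter))
```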



Quote for the day:

"A sense of humor is part of the art of leadership, of getting along with people, of getting things done." -- Dwight D. Eisenhower

Daily Tech Digest - June 29, 2021

How to eliminate ransomware risk, not just manage it

There are a multitude of solutions available, all of which are designed to reduce risk and protect specific areas of the network. However, there is one method that is rising in popularity and has proven to be highly effective. Zero Trust approaches to security are being applied by organisations on a daily basis, developed on the grounds that trust should never be given out superfluously – transitioning from “Trust but Verify” to “Verify, then Trust”. Forrester recently announced that Zero Trust can reduce an organisation’s risk exposure by 37% or more. This model eliminates automatic access for any asset, whether internal or external. It instead assumes that the context of any action must be validated before it can be allowed to proceed. Another technique that has emerged as one of the best for protecting businesses from ransomware attacks, and one closely aligned to the Zero Trust model, is micro-segmentation. Micro-segmentation restricts adversary lateral movement through the network and reduces a company’s attack surface. A strong security perimeter, whilst important, is no longer enough to protect business IT networks from ransomware threats – since it just takes one breach of the perimeter to compromise the network.


How to conquer synthetic identity fraud

The arrival of our truly physical-digital existence has forced identity protection to the forefront of our minds and amplified the need to understand how, through technology, our identities and behavior can be used to equalize and authenticate our access to all of life’s experiences. Second, there’s been an exceptional rise in all types of fraud, including synthetic. Tackling this will require an intelligent, coordinated defense against cybercriminals employing new and more sophisticated techniques. Not unlike a police database that tracks criminals in different states, there’s a need for platforms where companies can anonymously share data signatures about bad actors with one another so that fraudulent activity becomes much easier to detect. According to the Aite Group, 72% of financial services firms surveyed believe synthetic identity fraud is a much more pressing issue than identity theft, and the majority plan to make substantive changes in the next two years. With collaboration driving that change, we have seen cases of synthetic fraud detection increasing by more than 100% and overall forged-document detection improving by 8% on certain platforms.


A Look at GitOps for the Modern Enterprise

In the GitOps workflow, the system’s desired configuration is maintained in a source file stored in the git repository with the code itself. The engineer makes changes to the configuration files representing the desired state instead of making changes directly to the system via the CLI. Reviewing and approving such changes can be done through standard processes such as pull requests, code reviews, and merges to the master branch. When the changes are approved and later merged to the master branch, an operator software process is responsible for moving the system’s current state to the desired state based on the configuration stored in the newly updated source file. In a typical GitOps implementation, manual changes are not allowed, and all changes to the configuration should be made to files stored in Git. In the strictest case, authority to change the system is given only to the operator software process. In a GitOps model, the infrastructure and operations engineers’ role changes from implementing the infrastructure modifications and application deployments to developing and supporting the automation of GitOps and assisting teams in reviewing and approving changes via Git.
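
The operator's job is essentially a reconciliation loop: read the desired state from a file tracked in Git, compare it with the live state, and apply only the differences. The sketch below illustrates that idea in Python with a dict standing in for the running system; it is not a real Kubernetes operator, and the function names are hypothetical.

```python
# Minimal sketch of the GitOps reconciliation idea. Not a real operator:
# read_desired_state, get_live_state and apply_change are stand-ins, and
# the "cluster" is just a dict.
import json

def read_desired_state(path: str) -> dict:
    # In practice this file lives in the Git repository alongside the code.
    with open(path) as f:
        return json.load(f)

def get_live_state(cluster: dict) -> dict:
    return cluster  # stand-in for querying the running system

def apply_change(cluster: dict, key: str, value) -> None:
    print(f"reconciling {key}: {cluster.get(key)!r} -> {value!r}")
    cluster[key] = value

def reconcile_once(desired_path: str, cluster: dict) -> None:
    desired = read_desired_state(desired_path)
    live = get_live_state(cluster)
    for key, value in desired.items():
        if live.get(key) != value:       # drift between Git and the system
            apply_change(cluster, key, value)

# Simulate a merge to master that bumps replicas and the image tag.
with open("desired.json", "w") as f:
    json.dump({"replicas": 4, "image": "shop-api:1.4.0"}, f)

cluster_state = {"replicas": 2, "image": "shop-api:1.3.0"}
reconcile_once("desired.json", cluster_state)   # run on a schedule or on merge
print(cluster_state)
```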


Preventing Transformational Burnout

Participation can be viewed as a strain—it’s a tool that comes in different sizes and models and it is useful. Still, when individuals are forced to participate in anything that doesn’t resonate with their inner motivation, a leader is the one pulling the trigger of burnout. Note that passion is often thought to serve as a band-aid to the individual burnout when there is the perception that, “I care so much I must put all my efforts in the matter.” In situations where management doesn’t wish to share decision-making control with others, where employees or other stakeholders are passive or apathetic (or suffering from individual burnout), or in organizational cultures that take comfort in bureaucracy, pushing participatory efforts may be unwise. Luckily, agile stems from participation and self-organization. As you plan for employee participation in your transformation efforts, it’s important to have realistic expectations. Not all “potential associates” desire to participate and those that do may not yet have the skills to do so productively. As Jean Neumann found in her research on participation in the manufacturing industry, various factors can lead individuals to rationally choose to “not” participate. Neumann further notes, as have others, that participation requires courage.


How AutoML helps to create composite AI?

The most straightforward method for solving the optimization task is a random search over the appropriate block combinations. A better choice, however, is meta-heuristic optimization algorithms: swarm and evolutionary (genetic) algorithms. In the case of evolutionary algorithms, one should keep in mind that they need specially designed crossover, mutation, and selection operators. Such special operators are important for processing individuals described by a DAG; they also make it possible to take multiple objective functions into account and to include additional procedures that create stable pipelines and avoid overcomplication. The crossover operators can be implemented using subtree crossover schemes. In this case, two parent individuals are chosen and exchange random parts of their graphs. But this is not the only possible implementation; there are more semantically complex variants (e.g., one-point crossover). Implementation of the mutation operators may include a random change of a model (or computational block) in a random node of the graph, removal of a random node, or random addition of a subtree.
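
A small sketch makes the graph-aware operators easier to picture. Below, pipelines are represented as nested tuples of (model, child subtrees), subtree crossover grafts a random piece of one parent onto the other, and mutation swaps the model at a random node. This is illustrative only and not any particular AutoML framework's implementation.

```python
# Illustrative subtree crossover and node mutation for pipelines represented
# as nested tuples (model_name, [child_subtrees]).
import random

MODELS = ["scaling", "pca", "ridge", "random_forest", "knn"]

def random_pipeline(depth: int = 2):
    children = [] if depth == 0 else [random_pipeline(depth - 1)
                                      for _ in range(random.randint(0, 2))]
    return (random.choice(MODELS), children)

def collect_nodes(tree, path=()):
    # Yield (path, subtree) pairs for every node in the tree.
    yield path, tree
    for i, child in enumerate(tree[1]):
        yield from collect_nodes(child, path + (i,))

def replace_at(tree, path, subtree):
    if not path:
        return subtree
    model, children = tree
    new_children = list(children)
    new_children[path[0]] = replace_at(children[path[0]], path[1:], subtree)
    return (model, new_children)

def subtree_crossover(parent_a, parent_b):
    path_a, _ = random.choice(list(collect_nodes(parent_a)))
    _, sub_b = random.choice(list(collect_nodes(parent_b)))
    return replace_at(parent_a, path_a, sub_b)   # child gets a piece of each parent

def mutate(tree):
    path, node = random.choice(list(collect_nodes(tree)))
    return replace_at(tree, path, (random.choice(MODELS), node[1]))  # swap the model

random.seed(42)
a, b = random_pipeline(), random_pipeline()
print(mutate(subtree_crossover(a, b)))
```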


AI is driving computing to the edge

Companies are adopting edge computing strategies because sending their ever-increasing piles of data to the cloud — and keeping it there — has become too expensive. Moreover, the time it takes to move data to the cloud, analyze it, and then send an insight back to the original device is too long for many jobs. For example, if a sensor on a factory machine senses an anomaly, the machine’s operator wants to know right away so she can stop the machine (or have a controller stop the machine). Round-trip data transfer to the cloud takes too long. That is why many of the top cloud workloads seen in the slide above involve machine learning or analysis at the edge. Control logic for factories and sensor fusion needs to happen quickly for it to be valuable, whereas data analytics and video processing can generate so much data that sending it and working on that data in the cloud can be expensive. Latency matters in both of those use cases as well. But a couple of other workloads on the slide indicate where the next big challenge in computing will come from. Two of the workloads listed on the slide involve data exchanges between multiple nodes.


Data-Wiping Attacks Hit Outdated Western Digital Devices

Storage device manufacturer Western Digital warns that two of its network-attached storage devices - the WD My Book Live and WD My Book Live Duo - are vulnerable to being remotely wiped by attackers and now urges users to immediately disconnect them from the internet. ... The underlying flaw in the newly targeted WD devices is designated CVE-2018-18472 and was first publicly disclosed in June 2019. "Western Digital WD My Book Live (all versions) has a root remote command execution bug via shell metacharacters in the /api/1.0/rest/language_configuration language parameter. It can be triggered by anyone who knows the IP address of the affected device," the U.S. National Vulnerability Database noted at the time. Now, it says, the vulnerability is being reviewed in light of the new attacks. "We are reviewing log files which we have received from affected customers to further characterize the attack and the mechanism of access," Western Digital says. "The log files we have reviewed show that the attackers directly connected to the affected My Book Live devices from a variety of IP addresses in different countries. 


IoT For 5G Could Be Next Opportunity

On the consumer front, a technology currently being planned for inclusion in the forthcoming 3GPP Release 17 document, called NR Light (or Lite), looks very promising. Essentially functioning as a more robust, 5G network-tied replacement for Bluetooth, NR Light is designed to enable the low latency, high security, and cloud-powered applications of a cellular connection, without the high-power requirements of a full-blown 5G modem. Practically speaking, this means we could see things like AR headsets that are tethered to a 5G-connected smartphone use NR Light for their cloud connectivity, while being much more power-friendly and battery efficient. Look for more on NR Light in these and other applications that require very low power in 2022. At the opposite end of the spectrum, some carriers are starting the process of “refarming” the radio spectrum they’re currently using to deliver 2G and 3G traffic. In other words, they’re going to shut those networks down in order to reuse those frequencies to deliver more 5G service. The problem is that many existing IoT applications use those older networks, because they’re very well-suited to the lower data rates used by most IoT devices.


Security and automation are top priorities for IT professionals

Organizations across the globe have experienced crippling cyberattacks over the past year that have significantly impacted the global supply chain. Due to the growing number of threats, 61% of respondents said that improving security measures continues to be the dominant priority. Cybersecurity systems topped the list of what IT professionals plan to invest in for 2022, with 53% of respondents planning to budget for email security tools such as phishing prevention, and 33% of respondents investing in ransomware protection. Cloud technologies were also top of mind this year, with 54% saying their IaaS cloud spending will increase and 36% anticipating growth in spending on SaaS applications. Cloud migration was also a high priority for respondents in 2021, which accounted for migrations across PaaS, IaaS and SaaS software. IT professionals also want to increase their productivity through automation, which ranked second in top technologies for investment. Almost half of respondents stated that they will allocate funds for this in 2021.


Making fungible tokens and NFTs safer to use for enterprises

One of the weaknesses of current token exchange systems is the lack of privacy protection they offer beyond a very basic pseudonymization. In Bitcoin, for example, transactions are pseudonymous and reveal the Bitcoin value exchanged. That makes them linkable and traceable, presenting risks that are unacceptable in other settings such as enterprise networks, in a supply chain, or in finance. While some newer cryptocurrencies offer a higher degree of privacy, entirely concealing the actual asset exchanged and the transaction participants, they retain the permissionless character of Bitcoin and others, which presents challenges on the regulatory compliance side. For enterprise blockchains, a permissioned setting is required, in which the identity of participants issuing and exchanging tokens is concealed, yet non-repudiable, and transaction participants can be securely identified upon properly authorized requests. A big conundrum in permissioned blockchains lies in accommodating the use of token payment systems while at the same time preserving the privacy of the parties involved and still allowing for auditing functionalities.



Quote for the day:

"The quality of a leader is reflected in the standards they set for themselves." -- Ray Kroc

Daily Tech Digest - June 28, 2021

Is Quantum Supremacy A Threat To The Cryptocurrency Ecosystem?

Quantum computing has been a potential threat to cryptocurrencies ever since the birth of Bitcoin and its contemporaries. Before we can understand why, let’s dive into how cryptocurrency transactions work, taking Bitcoin as an example. Bitcoin is a decentralized peer-to-peer system for transferring value. This means that, unlike traditional financial institutions that mediate many of the processes, Bitcoin users handle these processes themselves, such as creating their own addresses. By means of complex algorithms, users generate a random private key and a public address to perform transactions and keep them secure. At the moment, cryptographically secure private keys serve as the only protection for users’ funds. If quantum computers cracked the encryption safeguarding the private keys, it would likely mark the end for Bitcoin and all cryptocurrencies. Not that we want to be the bearer of bad news, but there are already known quantum algorithms that undermine public-key cryptography: Shor’s algorithm, for example, would enable the extraction of private keys from any public key.
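
The one-way relationship the article describes can be illustrated with a toy example. Bitcoin actually uses elliptic-curve cryptography, but the same asymmetry shows up in the simpler discrete-logarithm setting below: deriving the public value from the private key is one fast modular exponentiation, while reversing it is the hard problem that Shor's algorithm, given a large enough quantum computer, would make easy. The parameters here are far too small for real use.

```python
# Toy illustration (not Bitcoin's actual elliptic-curve cryptography):
# a public key derived from a private key with modular exponentiation.
# Going forward is cheap; going backward is a discrete logarithm, infeasible
# classically at real key sizes but solvable by Shor's algorithm on a
# sufficiently large quantum computer.
import secrets

P = 2**127 - 1          # toy prime modulus (far too small for real use)
G = 5                   # toy generator

private_key = secrets.randbelow(P - 2) + 1
public_key = pow(G, private_key, P)     # easy: fast modular exponentiation

print(f"public key: {public_key}")
# Recovering private_key from (G, P, public_key) is the hard direction;
# the brute force below only checks a tiny slice of the search space.
for guess in range(1, 10**6):
    if pow(G, guess, P) == public_key:
        print("cracked:", guess)
        break
else:
    print("not found in the first million guesses (as expected)")
```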


Will open-source Postgres take over the database market? Experts weigh in

Stalwart IBM is now offering an EDB Postgres solution. IBM’s Villalobos thinks it’s as much an enterprise modernization requirement as anything else that’s driving demand for the open-source, cloud-oriented product. He thinks customers should use IBM’s Rocket cloud (a methodology that helps customers appraise all database modernization options). “Not all customers totally understand where they are today,” he said. Analyzing what costs could be freed up is a part of what IBM offers. Part of what is driving the movement toward EDB and PostgreSQL is that Postgres has done a good job on getting EDB working for the enterprise, according to Villalobos. “Is there any database that can finally replace Oracle in the world,” Young asked. The interoperability and compatibility between Oracle and EDB makes EDB a contender, he believes. Tangible differences, according to Sheikh, include getting environments running more quickly, with “less latency in terms of agility,” and the ability to deploy anywhere, both attractive factors for the data scientist.


The biggest post-pandemic cyber security trends

Corporate culture often brings to mind ping pong tables, free beers on Fridays and bean bags, but what about security and business continuity planning? Well, they are about to become just as important. Cyber security needs to become front of mind for all employees, not just those who work in IT. One way to do this is through “war-gaming” activities, training employees and creating rapid response plans to ensure that everyone is prepared in case of an attack. The roles of the C-suite have been changing for quite some time now, as each role becomes less siloed and the company is viewed more as a single unit than as separate teams. The evolution of the chief information security officer (CISO) is a great example of this – rather than being a separate role to the chief information officer (CIO) or chief technology officer (CTO), the CISO is now responsible for security, customer retention, business continuity planning and much more. Now, this change needs to spread to the rest of the business so that all employees prioritise security and collaboration, whatever their level and role. The natural result of this is that teams will become more open and better at information sharing, which will make it easier to spot when there has been a cyber security issue, as everyone will know what is and isn’t normal across the company.


Why SAFe Hurts

Many transformations begin somewhere after the first turn on the Implementation Roadmap. Agile coaches will often engage after someone has, with the best of intentions, decided to launch an Agile Release Train (ART), but hasn’t understood how to do so successfully. As a result, the first Program Increment, and SAFe, will feel painful. Have you ever seen an ART that is full of handoffs and is unable to deliver anything of value? This pattern emerges when an ART is launched within an existing organizational silo, instead of being organized around the flow of value. When ARTs are launched in this way, the same problems that have existed in the organization for years become more evident and more painful. For this reason, many Agile and SAFe implementations face a reboot at some point. Feeling the pain, an Agile coach will help leaders understand why they’re not getting the expected results. Here’s where organizations will reconsider the first straight of the Implementation Roadmap, find time for training, and re-launch their ARTs. This usually happens after going through a Value Stream Identification and ART Identification workshop to best understand how to organize so that ARTs are able to deliver value.


4 benefits of modernizing enterprise applications

Legacy applications, often based on a monolithic architecture, are not only difficult but also costly to update and maintain, since all of the application’s components are bundled together. Even when updated, they can introduce integration complexities that waste time and resources. By modernizing an application to a microservices architecture, components become smaller, loosely coupled, and able to be deployed and scaled independently. With cloud architecture growing more complex, IT services providers are expected to deliver the best combination of services to fulfil customer needs. They need to conduct a deep-dive analysis into the customer’s business, technical costs, and the contribution of each application in the IT portfolio. Based on the functional and technical value that each application brings, the IT services provider should recommend the most compatible modernization and migration approach. In the process, they will enable rationalization and optimization of the customer’s IT landscape. Thus, only a handful of applications that are both functionally and technically relevant are chosen for migration to the cloud.


Unlocking the Potential of Blockchain Technology: Decentralized, Secure, and Scalable

Some blockchains select users to add and validate the next block by having them devote computing power to solving cryptographic riddles. That approach has been criticized for being inefficient and energy intensive. Other blockchains give users holding the associated cryptocurrency the power to validate new blocks on behalf of everyone else. That approach has been criticized for being too centralized, as relatively few people hold the majority of many cryptocurrencies. Algorand also relies on an associated cryptocurrency to validate new blocks. The company calls the currency Algo coins. Rather than giving the power to validate new blocks to the people with the most coins, however, Algorand lets the owners of 1,000 tokens, randomly drawn from the 10 billion in circulation, select themselves to validate the next block. The tokens are selected in a microsecond-long process that requires relatively little computing power. The random selection also makes the blockchain more secure by giving hackers no clear target, helping Algorand solve the “trilemma” put forth by the Ethereum founder with a scalable, secure, and decentralized blockchain.
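
The stake-weighted self-selection idea is simple to sketch. The code below is an illustration only, not Algorand's actual VRF-based sortition: it picks a fixed number of tokens uniformly at random from the circulating supply, and the owners of those tokens form the committee that validates the next block, so influence is proportional to holdings without any fixed validator set.

```python
# Illustration of stake-weighted random selection; not Algorand's VRF sortition.
import random

random.seed(7)
holdings = {"alice": 6_000, "bob": 3_000, "carol": 1_000}   # tokens per account
total_supply = sum(holdings.values())
committee_size = 10                                          # stand-in for 1,000

# Picking token indices uniformly at random is equivalent to stake-weighted
# selection of their owners.
selected_tokens = random.sample(range(total_supply), committee_size)

# Map each token index back to the account that holds it.
bounds, start = [], 0
for owner, amount in holdings.items():
    bounds.append((start, start + amount, owner))
    start += amount

committee = [owner for t in selected_tokens
             for lo, hi, owner in bounds if lo <= t < hi]
print(committee)   # dominated by 'alice', reflecting her 60% stake
```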


4 Skills Will Set Apart Tomorrow’s Data Scientists

While data science is complex, the widening data literacy gap threatens innovation and collaboration across teams. Accenture found that 75% of employees read data, yet only 21% “are confident with their data literacy skills.” While organizations should invest in data literacy across the entire organization to boost productivity, today’s data scientists should learn how to best communicate the fundamentals behind data. The ability to explain different concepts like variance, standard deviation, and distributions will help data scientists explain how data was collected, what the data set reveals, and whether it appears valid. These insights are helpful when communicating data to other stakeholders, especially the C-suite. ... The best data scientists are also adept storytellers, providing the necessary context about data sets and explaining why the data is important in the larger picture. When sharing a new set of data or the results of a data project, focus on crafting a narrative around the top three things the audience should walk away with. Reiterate these points throughout whatever medium you choose – presentation, email, interactive report, etc. – to move your audience to action.


How to migrate Java workloads to containers: 3 considerations

Containerization and orchestration also overlap two other key trends right now: application migration and application modernization. Migration typically refers to moving workloads from one environment (today, usually a traditional datacenter) to another (usually a cloud platform). Modernization, often used as an umbrella term, refers to the various methods of migrating applications to a cloud environment. These run the gamut from leaving the code largely as-is to a significant (or total) overhaul in order to optimize a workload for cloud-native technologies. Red Hat technology evangelist Gordon Haff notes that this spectrum bears out in Red Hat’s enterprise open source research: The 2020 report found a healthy mix of strategies for managing legacy applications, including “leave as-is” (31 percent), “update or modernize” (17 percent), “re-architect as cloud-enabled” (16 percent), and “re-architect as cloud-native” (14 percent). “Leave as-is” is self-explanatory. “To whatever large degree containers are the future, you don’t need to move everything there just because,” Haff says. But as Red Hat’s research and other industry data reflect, many organizations are indeed containerizing some workloads. Containers can play a role in any of the other three approaches to legacy applications.


A sector under siege: how the utilities industry can win the war against ransomware

Although the utilities sector is typically resilient to unexpected macro changes, current market conditions have forced every industry to drastically accelerate digital transformation plans. The utilities sector is no exception, which perhaps explains its sudden shift to the cloud. As a result of this rapid transformation, however, 55% of utilities companies admit that their security measures haven’t kept up with the complexity of their IT infrastructure, meaning they have less visibility and control of their data than ever before. This ‘chink in the armour’ could be their downfall when an attacker strikes. So, it comes as no surprise that there are lingering concerns around cloud security for two-thirds (67%) of utilities companies. Other apprehensions around cloud adoption include reduced data visibility (59%) or risk of downtime (55%). These are valid concerns given that two-thirds (64%) of utilities sector companies admit that their organisation’s approach to dealing with cyber attacks could be improved, while increasing resiliency to ransomware and data governance are among their top three priorities.


Danske Bank’s 360° DevSecOps Evolution at a Glance

The main enablers of this evolution are actually several, a combination of internal and external factors. From an internal perspective, firstly, it was our desire to further engineer, and in some cases reverse-engineer, our SDLC. We wanted to increase release velocity and shorten time to market, improve developer productivity, improve the interoperability of capabilities, and modernize our technological landscape; in addition, we wanted to find a pragmatic equilibrium between speed, reliability, and security, while shifting the latter two as far left as sensible in the SDLC. Last but not least, we wanted to make the silos (business to IT and IT to IT) across the internal DevOps ecosystem collapse by improving collaboration, communication, and coalition around our DevOps vision. From an external perspective, competition is of course a factor. In an era of digital disruption, time to market and reliability of services and products are vital. DevOps practices foster improvements in those quality aspects. In addition, we as a bank are heavily regulated, with several requirements and guidelines from the FSA and EBA.



Quote for the day:

"Leadership is the creation of an environment in which others are able to self-actualize in the process of completing the job." -- John Mellecker

Daily Tech Digest - June 27, 2021

Sparking the next cycle of IT spending

The past cycles were driven by vendors, by a shift in technology that created a new productivity paradigm. In fact, we could fairly say that every past cycle was driven by a single vendor, IBM. They popularized and drove commercial computing in the first cyclic wave. Their transaction processing capability (CICS, for those interested in the nitty gritty) launched the second wave, and their PC the third. Can IBM, or any vendor, pull together the pieces we need? These days, for something like contextual computing, we’re probably reliant on open-source software initiatives, and I’m a believer in them, with a qualification. The best open-source contextual computing solution wouldn’t likely be developed by a community. It would be done by a single player and supported and enhanced by a community. Open-source simplifies the challenge that being the seminal vendor for contextual computing would pose, but some vendor is still going to have to build the framework and make that initial submission. Otherwise we’ll spend years and won’t accomplish anything in the end.


Hackers Crack Pirated Games with Cryptojacking Malware

Dubbed “Crackonosh,” the malware — which has been active since June 2018 — lurks in pirated versions of Grand Theft Auto V, NBA 2K19 and Pro Evolution Soccer 2018 that gamers can download free in forums, according to a report posted online Thursday by researchers at Avast. The name means “mountain spirit” in Czech folklore, a reference to the researchers’ belief that the creators of the malware are from the Czech Republic. Cracked software is a version of commercial software that is often offered for free but often with a catch — the code of the software has been tampered with, typically to insert malware or for some other purpose beneficial to whoever cracked it. In the case of Crackonosh, the aim is to install the coinminer XMRig to mine Monero cryptocurrency from within the cracked software downloaded to an affected device, according to the report. So far, threat actors have reaped more than $2 million, or 9000 XMR in total, from the campaign, researchers said. Crackonosh also appears to be spreading fast, affecting 222,000 unique devices in more than a dozen countries since December 2020. As of May, the malware was still getting about 1,000 hits a day, according to the report.


Cryptocurrency Blockchains Don’t Need To Be Energy Intensive

Blockchain is a generic term for the way most cryptocurrencies record and share their transactions. It’s a type of distributed ledger that parcels up those transactions into chunks called “blocks” and then chains them together cryptographically in a way that makes it incredibly difficult to go back and edit older blocks. How often a new block is made and how much data it contains depends on the implementation. For Bitcoin, that time frame is 10 minutes; for some cryptocurrencies it’s less than a minute. Unlike most ledgers, which rely on a central authority to update records, blockchains are maintained by a decentralized network of volunteers. The ledger is shared publicly, and the responsibility for validating transactions and updating records is shared by the users. That means blockchains need a simple way for users to reach agreement on changes to the ledger, to ensure everyone’s copy of the ledger looks the same and to prevent fraudulent activity. These are known as consensus mechanisms, and they vary between blockchains.


Quantum computers just took on another big challenge.

Classical computers can only offer simplifications and approximations. The Japanese company, therefore, decided to try its hand at quantum technologies, and announced a partnership with quantum software firm CQC last year. "Scheduling at our steel plants is one of the biggest logistical challenges we face, and we are always looking for ways to streamline and improve operations in this area," said Koji Hirano, chief researcher at Nippon Steel. Quantum computers rely on qubits – tiny particles that can take on a special, dual quantum state that enables them to carry out multiple calculations at once. This means, in principle, that the most complex problems that cannot be solved by classical computers in any realistic timeframe could one day be run on quantum computers in a matter of minutes. The technology is still in its infancy: quantum computers can currently only support very few qubits and are not capable of carrying out computations that are useful at a business's scale. Scientists, rather, are interested in demonstrating the theoretical value of the technology, to be prepared to tap into the potential of quantum computers once their development matures.


Why Agile Fails Because of Corporate Culture

In most companies, however, providing creative space and encouraging failure aren't really core values, are they? Therefore, it's the creation of this awareness which is a fundamental obstacle and the only real reason why agility fails because of corporate culture. The weirdest fact about all this: The failure of agility in the company is itself a failure from which companies learn nothing. Why? Because failure is not seen as an opportunity for improvement, but as failure. And so it goes on and on; a vicious circle from which companies can only break out through a culture change. This change requires certain skills from all those involved. For some, these skills have yet to be awakened; for others, they're probably present and may only just need to be fostered. But all of those involved will need to be coached, on a long-term basis. Becoming agile is an evolutionary process and it can't just be avoided or saved-upon by taking advantage of the wisdom of others or applying arbitrary agile frameworks from a consultant's shelf to be agile. And let me stress that: Evolution. That means it's ever-ongoing. It's not a one-off big bang thing and whoohoo you're agile. As a C-level guy or gal, as a board member or shareholder it will take loads of guts to go that way.


Ag tech is working to improve farming with the help of AI, IoT, computer vision and more

Harvesting is a great example of AI on the farm. A combine is used to separate grain from the rest of the plant without causing damage to corn kernels. "That separation process is not perfect, and it's impossible for a farmer to see every kernel as it makes its way through the different sections of the machine," Bonefas explained. "AI enables the machine to monitor the separation process and to make decisions based on what it's seeing. If the harvest quality degrades, the AI-enabled system automatically optimizes the combine's settings or recommends new settings to the farmer to achieve more favorable results." In other farming operations, AI uses computer vision and machine learning to detect weeds and precisely spray herbicides only where needed. This results in significant cost reductions for farmers, who in the past used sprays across entire fields, even in areas where sprays were not needed. The weed detection technology works because it integrates AI with IoT devices such as cameras that can take good pictures regardless of weather conditions. Computers process imagery from the cameras and use sophisticated machine learning algorithms that detect the presence of weeds and actuate the required nozzles on sprayers at the exact moment needed to spray the weeds.


Architecture is theory

For organisation versus enterprise, one of the common assumptions is that the two terms are synonyms for each other: that the organisation is the enterprise, that the enterprise is the organisation. In reality, ‘organisation’ and ‘enterprise’ denote two different scope-entities (in essence, ‘how’ versus ‘why’) whose boundaries may coincide – but that special-case is basically useless and dangerously misleading, because it represents a context in which the sole reason for the organisation’s existence is to talk only with itself. The simplest summary here is that the enterprise represents the context, purpose and guiding-story for the respective organisation (or ‘service-in-focus’, to use a more appropriate context-neutral term): we develop an architecture for the organisation, about the enterprise that provides its context. In practice, the scope of enterprise we’d typically need to explore for an enterprise-architecture would be three steps ‘larger’ than the scope for the organisation – the organisation plus its transactions, direct-interactions and indirect-interactions – and also looking ‘inward’ to perhaps the same depth:


Cisco ASA Bug Now Actively Exploited as PoC Drops

In-the-wild XSS attacks have commenced against the security appliance (CVE-2020-3580), as researchers publish exploit code on Twitter. Researchers have dropped a proof-of-concept (PoC) exploit on Twitter for a known cross-site scripting (XSS) vulnerability in the Cisco Adaptive Security Appliance (ASA). The move comes as reports surface of in-the-wild exploitation of the bug. Researchers at Positive Technologies published the PoC for the bug (CVE-2020-3580) on Thursday. One of the researchers there, Mikhail Klyuchnikov, noted that there were a heap of researchers now chasing after an exploit for the bug, which he termed “low-hanging” fruit. ... “Researchers often develop PoCs before reporting a vulnerability to a developer and publishing them allows other researchers to both check their work and potentially dig further and discover other issues,” Claire Tills, senior research engineer at Tenable, told Threatpost. “PoCs can also be used by defenders to develop detections for vulnerabilities. Unfortunately, giving that valuable information to defenders means it can also end up in the hands of attackers.”


How edge will affect enterprise architecture: Aruba explains

Logan reckons IoT-at-the-edge growth will be significant as enterprise organizations are now starting to look at the network as a far more important component than they did four or five years ago, “where it might [then] have just been four bars of Wi-Fi or connectivity from branch to headquarters,” Logan said. New requirements on the architectures will be an end result of this shift. They will include data-intensive workloads driven by AI and so on. “We are going to find over the next 10 years that a significant amount of the data that is born at the edge and the experiences that are delivered at the edge need a local presence of computer and communications,” Logan added. As far as enterprise architecture evolving, he uses the example of a healthcare environment, such as a hospital: Patient telemetry has to be collected from the bedside. But “what if the point of patient care is in the patient’s home?” he asked. This is a realistic proposition, as we’ve seen during the pandemic with the escalation of remote doctor care. “That’s a completely different set of circumstances, physically and logically from an enterprise architecture perspective,” Logan said.


Integrating machine learning and blockchain to develop a system to veto the forgeries and provide efficient results in education sector

The advancement of blockchain technology in terms of validation and data security has also been applied in the educational sector, with the validation and security of student data being considered prime aspects. Various systems providing such validation and security have been developed. For example, Li and Wu proposed the idea of flashing the system, through which the counterfeiting of student degrees can be avoided. Such a program has to some degree remedied the defects found in current solutions, making the use of a blockchain-based certificate a more feasible theory. Many types of certificates of success, grades, and diplomas, among other documents, can become a valuable tool in finding a new school or job. Gopal and Prakash proposed a blockchain-based digital certificate scheme based on the immutable properties of blockchain. This scheme preserves the essential data and eliminates the chance of any company with a job offering distrusting a student’s certification. Individual learning records are important for the professional careers of individuals. Certificates support the achievement of learning outcomes in education.



Quote for the day:

"Energy and persistence conquer all things." -- Benjamin Franklin

Daily Tech Digest - June 26, 2021

DevOps requires a modern approach to application security

As software development has picked up speed, organizations have deployed automation to keep up, but many are having trouble working out the security testing aspect of it. Current application security testing tools tend to scan everything all the time, overwhelming and overloading teams with too much information. If you look at all the tools within a CI pipeline, there are tools from multiple vendors, including open-source tools, that are able to work separately but together in an automated fashion while integrating with other systems like ticketing tools. “Application security really needs to make that shift in the same manner to be more fine-grained, more service-oriented, more modular and more automated,” said Carey. Intelligent orchestration and correlation is a new approach being used to manage security tests, reduce the overwhelming amount of information and let developers focus on what really matters: the application. While orchestration and correlation solutions are not uncommon on the IT operations side for things like network security and runtime security, they are just beginning to cross into the application development and security side of things, Carey explained.


Databricks cofounder’s next act: Shining a Ray on serverless autoscaling

Simply stated, Ray provides an API for building distributed applications. It enables any developer working on a laptop to deploy a model to a serverless environment, where deployment and autoscaling are automated under the covers. It delivers a serverless experience without requiring the developer to sign up for a specific cloud serverless service or know anything about setting up and running such infrastructure. A Ray cluster consists of a head node and a set of worker nodes that can run on any infrastructure, on-premises or in a public cloud. Its capabilities include an autoscaler that introspects pending tasks, activates the minimum number of nodes to run them, and monitors execution to ramp up more nodes or shut them down. There is some assembly required, however, as the developer needs to register compute instance types. Ray can start and stop VMs in the cloud of choice; the Ray docs explain how to do this in each of the major clouds and on Kubernetes. One would be forgiven for getting a sense that Ray is déjà vu all over again. Stoica, who was instrumental in fostering Spark's emergence, is taking on a similar role with Ray.
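
A minimal example shows how little ceremony the Ray API described above asks of the developer. Assuming Ray is installed, the snippet below runs as-is on a laptop's local cores; pointed at a configured cluster, the same code lets Ray's autoscaler start and stop worker nodes as tasks queue up. The workload itself is a made-up stand-in.

```python
# Minimal Ray example of the "laptop to cluster" model described above.
# ray.init(), @ray.remote and ray.get() are Ray's core APIs.
import ray

ray.init()  # connect to an existing cluster with ray.init(address="auto")

@ray.remote
def score_batch(batch_id: int) -> float:
    # Stand-in for model inference on one batch of data.
    return sum(i * 0.001 for i in range(batch_id * 1000))

# Launch tasks; Ray schedules them across whatever nodes are available.
futures = [score_batch.remote(i) for i in range(8)]
print(ray.get(futures))
```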


Akka Serverless is really the first of its kind

Akka Serverless provides a data-centric backend application architecture that can handle the huge volume of data required to support today’s cloud native applications with extremely high performance. The result is a new developer model, providing increased velocity for the business in a highly cost-effective manner leveraging existing developers and serverless cloud infrastructure. Another huge bonus of this new distributed state architecture is that, in the same way as serverless infrastructure offerings allow businesses to not worry about servers, Akka Serverless eliminates the need for databases, caches, and message brokers to be developer-level concerns. ... Developers can express their data structure in code and the way Akka Serverless works makes it very straightforward to think about the “bounded context” and model their services in that way too. With Akka Serverless we tightly integrate the building blocks to build highly scalable and extremely performant services, but we do so in a way that allows developers to write “what” they want to connect to and let the platform handle the “how”. As a best practice you want microservices to communicate asynchronously using message brokers, but you don’t want all developers to have to figure out how to connect to them and interact with them. 


Windows 11 enables security by design from the chip to the cloud

The Trusted Platform Module (TPM) is a chip that is either integrated into your PC’s motherboard or added separately into the CPU. Its purpose is to help protect encryption keys, user credentials, and other sensitive data behind a hardware barrier so that malware and attackers can’t access or tamper with that data. PCs of the future need this modern hardware root-of-trust to help protect from both common and sophisticated attacks like ransomware and more sophisticated attacks from nation-states. Requiring the TPM 2.0 elevates the standard for hardware security by requiring that built-in root-of-trust. TPM 2.0 is a critical building block for providing security with Windows Hello and BitLocker to help customers better protect their identities and data. In addition, for many enterprise customers, TPMs help facilitate Zero Trust security by providing a secure element for attesting to the health of devices. Windows 11 also has out-of-the-box support for Microsoft Azure Attestation (MAA), bringing hardware-based Zero Trust to the forefront of security and allowing customers to enforce Zero Trust policies when accessing sensitive resources in the cloud with supported mobile device management (MDM) solutions like Intune, or on-premises.


Switcheo — Zilliqa bridge will be a game-changer for BUILDers & HODlers!

Currently, a vast majority of blockchains operate in silos. This means that many blockchains can only read, transact, and access data within a singular blockchain. This limits blockchain user experience and hinders user adoption. Without interoperability, we have individual ecosystems where users and developers have to choose which blockchain to interact with. Once they choose a blockchain, they are limited to using its features and offerings. Not the most decentralised environment to build on right? No blockchain should be an island — and working alone doesn’t end well. We need to stay connected to different protocols so ideas, dApps and users can travel across platforms conveniently. With interoperability, users and developers can seamlessly transact with multiple blockchains and benefit from those cross-chain ecosystems, application offerings in areas like decentralised finance (DeFi), gaming, supply chain logistics, etc. The list goes on. Interoperability creates the ability for users and developers to not be stuck having to choose one blockchain over another, but rather, they can benefit from multiple chains being able to interlink.


JSON vs. XML: Is One Really Better Than the Other?

Despite serving very similar purposes, there are some critical differences between JSON and XML. Distinguishing between the two can help you decide when to opt for one or the other and understand which is the better alternative for specific needs and goals. First, as previously mentioned, XML is a markup language while JSON is a data format. One of the most significant advantages of using JSON is that the file size is smaller; thus, transferring data is faster than with XML. Moreover, since JSON is compact and very easy to read, the files look cleaner and more organized, without empty tags and data. The simplicity of its structure and minimal syntax makes JSON easier for humans to use and read. By contrast, XML is often criticized for its complexity and old-fashioned standard, due to the tag structure that makes files bigger and harder to read. However, JSON vs. XML is not entirely a fair comparison. JSON is often wrongly perceived as a substitute for XML, but while JSON is a great choice for simple data transfers, it does not perform any processing or computation.
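
The size and verbosity difference is easy to see by serializing the same record both ways with nothing but the standard library; the exact byte counts are incidental, the relative overhead of XML's tags is the point.

```python
# The same record serialized as JSON and as XML, using only the standard library.
import json
import xml.etree.ElementTree as ET

record = {"id": 42, "name": "Ada Lovelace", "email": "ada@example.com"}

as_json = json.dumps(record)

root = ET.Element("user")
for key, value in record.items():
    child = ET.SubElement(root, key)
    child.text = str(value)
as_xml = ET.tostring(root, encoding="unicode")

print(as_json)   # {"id": 42, "name": "Ada Lovelace", "email": "ada@example.com"}
print(as_xml)    # <user><id>42</id><name>Ada Lovelace</name>...</user>
print(len(as_json), "bytes of JSON vs", len(as_xml), "bytes of XML")
```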


How to Build Your Own Blockchain in NodeJS

It can be helpful to think of blockchains as augmented linked lists, or arrays in which each element points to the one before it. Each block (the equivalent of an array element) contains at least the following: a timestamp of when the block was added to the chain; some relevant data (in the case of a cryptocurrency this data stores transactions, but blockchains can store much more than cryptocurrency transactions); the hash of the block that precedes it; and a cryptographic hash of the data contained within the block, including the hash of the previous block. The key component that makes a blockchain so powerful is that embedded in each block's hash is the data of the previous block (stored through the previous block's hash). This means that if you alter the data of a block, you alter its hash, and therefore invalidate the hashes of all subsequent blocks. While this could probably be done in vanilla JavaScript, for the sake of simplicity we are going to write a Node.js script and take advantage of Node.js's built-in crypto module to calculate our hashes.
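
A minimal sketch of that structure, using the built-in crypto module the article refers to; the class and field names here are illustrative rather than taken from the article:

    // Minimal blockchain sketch in Node.js (illustrative names).
    const crypto = require('crypto');

    class Block {
      constructor(timestamp, data, previousHash = '') {
        this.timestamp = timestamp;
        this.data = data;                 // e.g. transactions
        this.previousHash = previousHash; // hash of the preceding block
        this.hash = this.calculateHash(); // hash over this block, including previousHash
      }

      calculateHash() {
        return crypto
          .createHash('sha256')
          .update(this.previousHash + this.timestamp + JSON.stringify(this.data))
          .digest('hex');
      }
    }

    class Blockchain {
      constructor() {
        this.chain = [new Block(Date.now(), 'genesis block')];
      }

      addBlock(data) {
        const previous = this.chain[this.chain.length - 1];
        this.chain.push(new Block(Date.now(), data, previous.hash));
      }
    }

    const chain = new Blockchain();
    chain.addBlock({ from: 'alice', to: 'bob', amount: 10 });
    console.log(chain.chain.map(b => b.hash));

Tampering with any block's data would change its hash and break the previousHash link of every block after it, which is exactly the property described above.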


5 Practices to Improve Your Programming Skills

Programmers have to write code that impresses both the hardware and other programmers: it should perform well in terms of time and space, and it should be clean. There are indeed several approaches to solving the same software engineering problem; a performance-first mindset motivates you to select the most practical and best-performing solution. Performance is still crucial regardless of modern hardware, because accumulated minor performance issues can degrade the whole software system over time. Implementing hardware-friendly solutions requires a grounding in computer science fundamentals, which teach us how to choose the right data structures and algorithms. Choosing the right data structures and algorithms is key to the success of every complex software engineering project. Some performance problems stay hidden in the codebase, and your performance test suite may not cover those scenarios, so make it a habit to apply a performance patch whenever you spot such a problem.
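
As a small, hypothetical illustration of the data-structure point (the sizes and names below are invented), compare a linear array lookup with a constant-time Set lookup in Node.js:

    // Membership checks: Array.includes scans the array (O(n)),
    // while Set.has is an average O(1) hash lookup.
    const ids = Array.from({ length: 100000 }, (_, i) => i);
    const idSet = new Set(ids);

    console.time('array lookups');
    for (let i = 0; i < 10000; i++) ids.includes(99999); // full scan each time
    console.timeEnd('array lookups');

    console.time('set lookups');
    for (let i = 0; i < 10000; i++) idSet.has(99999); // hash lookup each time
    console.timeEnd('set lookups');

The absolute timings will vary by machine, but the gap grows with input size, which is why small choices like this accumulate across a codebase.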


Containers Vs. Bare Metal, VMs and Serverless for DevOps

The workhorse of IT is the computer server on which software application stacks run. The server consists of an operating system plus computing, memory, storage and network access capabilities, and is often referred to as a computer machine or just a “machine.” A bare metal machine is a dedicated server using dedicated hardware. Data centers have many bare metal servers that are racked and stacked in clusters, all interconnected through switches and routers. Human and automated users of a data center access the machines through access servers, high-security firewalls and load balancers. The virtual machine introduced an operating system simulation layer between the bare metal server’s operating system and the application, so one bare metal server can support more than one application stack running a variety of operating systems. This provides a layer of abstraction that allows the servers in a data center to be software-configured and repurposed on demand. In this way, virtual machines can be scaled horizontally, by configuring multiple parallel machines, or vertically, by allocating more resources to a single virtual machine.


Debunking Three Myths About Event-Driven Architecture

Event-driven applications are often criticized for being hard to understand when it comes to execution flow. Their asynchronous and loosely coupled nature makes it difficult to trace the control flow of an application: an event producer does not know where the events it produces will end up, and the event consumer has no idea who produced the event. Without the right documentation, it is hard to understand the architecture as a whole. Standards like AsyncAPI and CloudEvents help document event-driven applications by listing the asynchronous operations they expose, the structure of the messages they produce or consume, and the event brokers they are associated with. The AsyncAPI specification produces machine-readable documentation for event-driven APIs, just as the OpenAPI Specification does for REST-based APIs. It documents the event producers and consumers of an application, along with the events they exchange, providing a single source of truth for the application's control flow. Apart from that, the specification can be used to generate implementation code and validation logic.
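
For a sense of what such documentation looks like, here is a minimal, hypothetical AsyncAPI 2.0 document; the service, channel and field names are invented for illustration:

    # Hypothetical example; names are illustrative, not from the article.
    asyncapi: '2.0.0'
    info:
      title: Order Service
      version: '1.0.0'
    channels:
      order/created:
        description: Consumers subscribe here to receive OrderCreated events.
        subscribe:
          message:
            name: OrderCreated
            payload:
              type: object
              properties:
                orderId:
                  type: string
                amount:
                  type: number

Tooling can render a document like this as human-readable documentation or use it to generate producer and consumer code, which is the "single source of truth" role described above.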



Quote for the day:

"Leadership is being the first egg in the omelet." -- Jarod Kintz

Daily Tech Digest - June 24, 2021

The pressure is on for technologists as they face their defining moment

For IT and business leaders, the message is clear. Technologists remain fully committed to the cause – they are desperate to have a positive impact, guide their organisations through the current crisis and leave a legacy of innovation. But it’s simply not sustainable (or fair) to ask technologists to continue as they are, when 91% say that they need to find a better work-life balance in 2021. As an industry and as business leaders, we need to be doing more to manage workload and stress, and to protect wellbeing and mental health. Technologists have to be given more support to deal with the heightened level of complexity in which they are now operating. That means having access to the right tools, data, and resources, and organisations protecting their wellbeing, both inside and outside working hours. In 2018, we revealed that 9% of technologists were operating as Agents of Transformation – elite technologists with the skills, vision and ambition to deliver innovation within their organisations – but that organisations needed five times as many technologists performing at that level in order to compete over the next ten years.


5 Characteristics of a Modern Enterprise Architect

Agile thinking is essential not just for the enterprise architect but for many other IT roles as well; in enterprise architecture, however, it is indispensable. Agile thinking doesn’t just mean thinking fast, it means thinking fast and right: you have to adapt to situations as you improve your models and solutions. Being an agile thinker is key to a successful career as a modern enterprise architect, because as market conditions change rapidly, you must adapt to those changes and keep your solutions robust. Data-driven decision makers use facts and logic from the available information to make informed decisions. As many professionals say, everything you need is in the data available to you, so data-driven decision-making is an essential quality for any enterprise architect. This process will help identify management systems, operating routes, and much more that align with your enterprise-level goals. One of the primary sources of data for decision-making is the users themselves: companies usually collect data during user sessions and use it to analyze user behavior.


Best Practices To Understand And Disrupt Today’s Cybersecurity Attack Chain

Unified security must be deployed broadly and consistently across every edge. Far too many organizations now own some edge environment that is unsecured or undersecured, and cybercriminals are taking full advantage of this. The most commonly unprotected/underprotected environments include home offices, mobile workers, and IoT devices. OT environments are also often less secure than they should be, as are large hyperscale/hyper-performance data centers where security tools cannot keep up with the speed and volume of traffic requiring inspection. Security solutions also need to be integrated so they can see and talk to each other. Isolated point security products can actually decrease visibility and control, especially as threat actors begin to deliver sophisticated, multi-vector attacks that take advantage of an outdated security system’s inability to correlate threat intelligence across devices or edges in real time, or provide a consistent, coordinated response to threats. Addressing this challenge requires an integrated approach, built around a unified security platform that can be extended to every new edge environment.


rMTD: A Deception Method That Throws Attackers Off Their Game

rMTD is the process of making an existing vulnerability difficult to exploit. This can be achieved through a variety of techniques that are either static – built in during the compilation of the application, referred to as Compile Time Application Self Protection (CASP) – or dynamically enforced during runtime, referred to as Runtime Application Self Protection (RASP). CASP and RASP are not mutually exclusive and can be combined. CASP modifies the application's generated assembly code during compilation in such a way that no two compilations generate the same assembly instruction set. Hackers rely on a known assembly layout from a generated static compilation in order to craft their attack. Once they've built their attack, they can target systems with the same binaries. They leverage the static nature of the compiled application or operating system to hijack systems. This is analogous to a thief getting a copy of the same safe you have and having the time to figure out how to crack it. The only difference is that in the case of a hacker, it's a lot easier to get their hands on a copy of the software than a safe, and the vulnerability is known and published.


SREs Say AIOps Doesn’t Live Up to the Hype

Why is AIOps so slow to catch on? Ultimately, the barriers facing these tools are the same as those facing human engineers: massive and growing complexity in IT environments. As digital products become more dependent on third-party cloud services, and as the number of things businesses want to track grows (from infrastructure to application to experience), the sheer volume, velocity and variety of monitoring data has exploded. ... Compounding the problem, enterprises increasingly rely on multiple “same-service” providers for IT services. That is, they use multiple cloud providers, multiple DNS providers, multiple API providers, etc. There are sound business reasons for doing so, such as adding resiliency and drawing on different vendors’ strengths in different areas. But even when two providers are doing basically the same thing, they use different interfaces and instrumentation, and their data sources often employ different metrics, data structures, and taxonomies. Whether you’re asking a human being or an AI-driven tool to solve this problem, this heterogeneity makes it extremely difficult to visualize the complete picture across the infrastructure. It also creates gray areas around how best to take advantage of each vendor’s different rules and toolsets.


How to convince your boss that cybersecurity includes Active Directory

Here’s the punchline: Everything relies on Active Directory. To get your boss to care, start with a discussion about operations and which parts are business critical. Have a business-level discussion, with you keeping score at a technical level. For example, when your boss says “Development needs to be running 100 percent of the time,” you work backward through all the systems, applications, and endpoints that need AD to function. Repeat this until you have a sufficient list of critical workloads and business operations that require AD be secure and functional. Next, talk about which of those environments need to be protected, which contain sensitive data, and which need to be resilient against a cyberattack. Let your boss talk while you just sit back, smile, and check off the boxes of everything that relies heavily on AD. Once you are armed with enough business ammo, have the technical discussion about how each of the business functions listed by your boss rely on AD to provide users access to data, applications, systems, and environments.


Next.js 11: The ‘Kubernetes’ of Frontend Development

The main innovation here is that Vercel has taken the dev server technology, which previously lived in a Node process on your local machine, and placed it entirely in the web browser, Rauch said. “So, all the technology for transforming the front-end UI components is now entirely ‘dogfooded’ inside the web browser, and that’s giving us the next milestone in terms of developer performance,” he said. “It makes front-end development multiplayer instead of single player.” Moreover, by tapping into ServiceWorker, WebAssembly and ES Modules technology, Vercel makes everything that’s possible when you run Next.js on a local machine possible in the context of a remote collaboration. Next.js Live also works offline and eliminates the need to run or operate remote virtual machines. Meanwhile, the Aurora team in the Google Chrome unit has been working on technology to advance Next.js and has delivered Conformance for Next.js and the Next.js Script Component. Rauch described Conformance as a co-pilot that helps the developer stay within certain guardrails for performance.
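
As a hedged sketch of the Script Component mentioned above (available from Next.js 11 onward; the script URL below is a placeholder):

    // pages/index.js (the third-party script URL is a placeholder)
    import Script from 'next/script';

    export default function Home() {
      return (
        <>
          {/* Defer loading the script until the browser is idle */}
          <Script src="https://example.com/analytics.js" strategy="lazyOnload" />
          <h1>Home</h1>
        </>
      );
    }

The strategy prop ("beforeInteractive", "afterInteractive" or "lazyOnload") controls when the script loads relative to page hydration, the kind of performance guardrail Conformance is meant to keep developers within.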


From Garry Kasparov to Google: Hamsa Buvaraghan’s Journey In The World of Algorithms

My fascination with AI began back in 1997, when I was in India and heard about IBM’s supercomputer Deep Blue defeating Garry Kasparov. It made top headlines at the time, and I wanted to explore the field further. However, access to research papers was hard to come by, as I didn’t even own a computer or have access to the internet. My father introduced me to computers when I got access to one in his office at the age of 10; the first thing I explored was Lotus Notes. With encouragement from my parents, I later pursued Computer Science Engineering. When I started working, I read several IEEE research papers, including “Smart games: beyond the Deep Blue horizon” and “Deep Blue’s hardware-software synergy.” I was fascinated not only with AI itself but also with the application of AI to solve real problems. I was also passionate about Biomedical Engineering, which led me to books on Neural Networks & AI for Biomedical Engineering and papers on Training Neural Networks for Computer-Aided Diagnosis. When it comes to machine learning, I am largely self-taught.


The CIO's Role in Maintaining a Strong Supply Chain

Access to up-to-the-minute information is essential for a CIO who hopes to maintain a strong supply chain. "Real-time data ensures that your supply team has the proper information required to make good, reliable decisions," Roberge said. "My advice is to automate as many data points as possible -- the fewer spreadsheets the better." ... Today's supply chain cannot be managed effectively or efficiently without adequate foundational tools, Furlong cautioned. "Appropriate technologies, implemented in a timely manner, can help an organization transform the supply chain and leapfrog the competition," he explained. "This includes everything from advanced predictive analytics to ... cutting-edge technologies such as blockchain, which is being used to track shipments at a micro level." CIOs also need to regularly assess and replace aging supply chain software, hardware, and network tools with modern systems leveraging both internal resources and third-party alliances. "Business requirements are changing rapidly, and supply chain technology ... must be flexible enough to handle complex business processes but also simplify supply chain processes," Furlong said.


What is stopping data teams from realising the full potential of their data?

Data warehouses have been a popular option since the 1980s and revolutionised the data world we live in, enabling business intelligence tools to be plugged in to ask questions about the past. Looking for future insights is more difficult, however, and there are restrictions on the volume and formats of the data that can be analysed. Data lakes, on the other hand, enable artificial intelligence (AI) to be utilised to ask questions about future scenarios. However, data lakes have a weakness of their own: all data can be stored, cleaned and analysed, but they can quickly become disorganised ‘data swamps’. Taking the best of both options, a new data architecture is emerging. Lakehouses are a technological breakthrough that finally allows businesses to look to future scenarios and back to the past in the same space, at the same time, revolutionising the future of data capabilities. It’s the solution enterprises have been calling out for throughout the last decade at least; by combining the best elements of the data warehouse and the data lake, the lakehouse enables enterprises to implement a superior data strategy, achieve better data management, and squeeze the full potential out of their data.



Quote for the day:

"A company is like a ship. Everyone ought to be prepared to take the helm." -- Morris Wilks