Daily Tech Digest - July 02, 2021

Addressing the cybersecurity skills gap through neurodiversity

It’s time to challenge the assumption that qualified talent equals neurotypicality. There are many steps companies can take to ensure inclusivity and promote belonging in the workplace. Let’s start at the very beginning: job postings. Postings should be unambiguous about the information they ask for and the job requirements. Start by making them more inclusive and less restrictive in what is required. Include a contact email address where an applicant can ask for accommodations, and be willing to take a less traditional approach in granting them. Traditional interviews can be a challenge for neurodivergent individuals, and they are often the first hurdle to employment. For example, to ease some candidates’ nerves, you could provide a list of the questions that will be asked as a guideline. More importantly, don’t judge someone based on a lack of eye contact. To promote an inclusive culture of neurodiversity and belonging, the workplace should be more supportive of different needs.


Cost of Delay: Learn Why Your Organisation Is Losing Millions

Backlogs in business can cause a drop in revenue. This is why some experts say that if you want to make a profit or save money, you have to prioritize your backlog in terms of money. Bear in mind that each product or project has different features or benefits. Consumers often think all of these features are important. But the reality is that each feature takes a different amount of time to create and implement, and features don’t carry the same value for the business. Prioritizing one means limiting or delaying another. And every day that a feature is not in production is another day the company is not profiting from it. By using the Cost of Delay, the company can determine which feature will cost it the most if its delivery is delayed. It also sets a clear guideline on which projects matter most to the company and other stakeholders, without the friction of other decision-making obstacles – which brings us to the next point below. ... The MoSCoW method often puts everything in the “Must-Have” bucket. Imagine if your company has limited resources and manpower. The quality of work and output will surely suffer.
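
To make this concrete, here is a minimal Python sketch, with invented feature numbers, of ranking a backlog by CD3 (Cost of Delay Divided by Duration), one common way to apply Cost of Delay:

```python
# Minimal Cost of Delay prioritization sketch; all figures are invented.
# CD3 favors work that loses the most value per week of delay relative to
# how long it takes to deliver.
features = [
    # (name, cost of delay in $/week, estimated duration in weeks)
    ("checkout-redesign", 12_000, 6),
    ("loyalty-points", 5_000, 2),
    ("dark-mode", 1_000, 1),
]

for name, cod, weeks in sorted(features, key=lambda f: f[1] / f[2], reverse=True):
    print(f"{name:18s} CD3 = {cod / weeks:7,.0f}")
```

Note that the feature with the highest raw Cost of Delay is not necessarily first in line; duration matters too.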


Google releases new open-source security software program: Scorecards

These Scorecards are based on a set of automated pass/fail checks that provide a quick review of many open-source software projects. The Scorecards project is an automated security tool that produces a "risk score" for open-source programs. That's important because only some organizations have systems and processes in place to check new open-source dependencies for security problems. Even at Google, with all its resources, this process is often tedious, manual, and error-prone. Worse still, many of these projects and developers are resource-constrained. The result? Security often ends up low on the task list. This leads to critical projects not following security best practices and becoming vulnerable to exploits. With the release of Scorecards v2, the project hopes to make these checks easier to run and security easier to achieve. The release includes new security checks, scales up the number of projects being scored, and makes the resulting data easily accessible for analysis. For developers, Scorecards helps reduce the toil and manual effort required to continually evaluate changing packages when maintaining a project's supply chain.
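
Conceptually, the scoring boils down to aggregating weighted pass/fail checks. Here is a toy Python sketch of that idea; the check names below are real Scorecards checks, but the weights and the formula are invented for illustration and are not the project's actual scoring logic:

```python
# Toy aggregation of pass/fail checks into a single 0-10 score.
checks = {
    # check name: (passed, illustrative weight)
    "Branch-Protection": (True, 3),
    "Signed-Releases": (False, 2),
    "Token-Permissions": (False, 2),
    "Dependency-Update-Tool": (True, 1),
}

earned = sum(w for passed, w in checks.values() if passed)
total = sum(w for _, w in checks.values())
print(f"aggregate score: {10 * earned / total:.1f}/10")
```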


Kubernetes Fundamentals: Facilitating Cloud Deployment and Container Simplicity

Kubernetes has made containers so popular that they are threatening to make VMs (virtual machines) obsolete. A VM is an operating system or software program that imitates the behavior of a physical computer and can run applications and programs as though it were a separate computer. A virtual machine can be unplugged from one computer and plugged into another, bringing its software environment with it. Both containers and VMs can be customized to any desired specification, and both offer isolated processes, providing an environment for experimentation that will not affect the “real” computer. Typically, however, containers do not include a guest operating system: they usually ship with only the application code and run only the operations they need. This is made possible by using “kernel features” from the physical computer. A kernel is the core program of a computer operating system and has complete control over the entire system. On most computers, it is often the first program (after the bootloader) to be loaded on start-up.


IoT is the Key to Reopening Safe Workplaces

By implementing IoT-connected devices for predictive cleaning, building managers can improve the overall efficiency and cleanliness of shared spaces. For example, IoT sensors can notify facility managers when soap dispensers and towels are running low so they can be replaced immediately, without a manual check. Predictive cleaning can lower infection rates and costs by enabling on-demand, as-needed cleaning to ensure common areas such as restrooms and conference rooms are safe for employees to use. Freespace created a Cleanreader solution that uses sensors to collect occupancy data. It provides facility managers and cleaning staff with the data they need to ensure that desks, meeting rooms and communal areas are cleaned and disinfected between users. Our expectations as workers and consumers have reached a new baseline. We want to be able to see what businesses are doing to stay safe and to know they are addressing how to avoid future impacts of this pandemic or any future major health crisis. Clearly, workers are concerned about the safety of their work environments. OSHA data shows more than 60,000 COVID-19-related complaints had been filed with the agency’s state and federal offices as of March 28, 2021.
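
A hypothetical sketch of that notification pattern in Python; the sensor names, threshold, and notify helper are all illustrative, not a real sensor API:

```python
# Poll consumable-level readings and raise a cleaning task on a threshold.
SOAP_THRESHOLD = 0.15  # refill when a dispenser drops below 15%
readings = {"restroom-2/dispenser-1": 0.12, "restroom-2/dispenser-2": 0.48}

def notify_facilities(sensor: str, level: float) -> None:
    # Stand-in for opening a ticket or paging the cleaning staff.
    print(f"refill needed: {sensor} at {level:.0%}")

for sensor, level in readings.items():
    if level < SOAP_THRESHOLD:
        notify_facilities(sensor, level)
```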


The Most Prolific Ransomware Families: A Defender's Guide

DomainTools researchers feel it is important to remind readers that all of these groups make alliances, share tools, and sell access to one another. Nothing in this space is static, and even when a single piece of software is behind a set of intrusions, there are likely several different operators using that same piece of ransomware, each tweaking its operation to their own designs. The playbook of the affiliate programs that many of these ransomware authors run is to design a piece of ransomware and then sell it off for a percentage of the ransom gained. Think of it as a cybercrime multi-level marketing scheme. Often there is a builder tool that allows the affiliate to customize the ransomware for a specific target, which at the same time tweaks the software slightly so it can evade standard, static detection mechanisms. This article’s intent is not to dive deep into tracking individual affiliates or into each of the stages of a piece of packed malware (looking at you, CobaltStrike), but only to cover the top level of the software used and its relations. Lastly, we must mention that access for the ransomware is often provided by an initial backdoor or botnet, frequently supplied by what is called an initial access broker.


The next frontier of digital transformation: Are you onboard?

All these transformations are going to bring a lot of confidential data online, some of it into the public domain. This data will need sufficient protection from being hacked and misused, so the next big digital transformation will be in the field of cybersecurity. Mathias cautions about the safety of customer data as companies adopt digital as a means of doing business. “Brands have to be very sensitive to data privacy concerns of consumers even as they need to provide a real time intuitive experience. This is a fine balance that many brands struggle with, as in the digital world users expect similar levels of customer experience from a local on-line retailer as they would from global giants like Amazon,” he adds. Tibrewala also noted that customer data is becoming more important than ever before. “Brands will need to invest in technologies like customer data platform and marketing automation to assimilate customer data; generate a single view of the customer across online and offline channels, and then use machine intelligence to provide the customer with the best possible solution for their requirement.”


Using collections to make your SQL access easier and more efficient

Collections are essentially indexed groups of data elements of the same type – arrays and lists, for instance. Most programming languages provide support for collections. Collections reduce the number of database calls by caching regularly accessed static data in the collection itself, and fewer calls mean higher speed and efficiency. Collections can also reduce the total code needed for an application, further increasing efficiency. Each element in a collection has a unique identifier called a subscript. Collections come with their own set of methods for operating on individual elements; PL/SQL includes methods for manipulating individual elements or the collection in bulk. ... Earlier versions of PL/SQL used what were known first as PL/SQL tables and later as index-by tables. In a PL/SQL table, collections were indexed using an integer, and individual collection elements could then be referenced using the index value. Because it was not always easy to identify an element by its numeric subscript, PL/SQL tables evolved to include indexing by alphanumeric strings.
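
As a rough analogy in Python (not PL/SQL itself), the two indexing styles described above look like this:

```python
# Integer-subscripted versus string-keyed collections, mirroring the shift
# from index-by-integer PL/SQL tables to indexing by alphanumeric strings.
by_integer = {1: "Smith", 2: "Jones", 3: "Lee"}     # like an index-by-integer table
by_string = {"NY": "Smith", "CA": "Jones"}          # like string-keyed subscripts

print(by_integer[2])    # reference an element by its integer subscript
print(by_string["CA"])  # or by an alphanumeric key, often easier to read

# Lookups against these in-memory structures avoid repeated database round
# trips for regularly accessed static data.
```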


Single page web applications and how to keep them secure

The architecture of SPAs presents new vulnerabilities for hackers to exploit because the attack surface shifts away from the client layers of the app to the APIs, which serve as the data transport layer that refreshes the SPA. With multi-page web apps, security teams need to secure only the individual pages of the app to protect their sensitive customer data. Traditional web security tools such as web application firewalls (WAFs) cannot protect SPAs because they do not address the underlying vulnerabilities found in the embedded APIs and back-end microservices. For example, in the 2019 Capital One data breach, the hacker reached beyond the client layer by attacking Capital One’s WAF and extracted data by exploiting underlying API-driven cloud services hosted on AWS. SPAs require a proper indexing of all their APIs, similar to how multi-page web apps require an indexing of their individual pages. For SPAs, vulnerabilities begin with the APIs. Sophisticated hackers will often begin with multi-level attacks that reach through the client-facing app, looking for unauthenticated, unauthorized, or unencrypted APIs exposed to the internet that they can exploit to extract customer data.
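
An illustrative Flask sketch of the basic defense: every API behind an SPA should reject unauthenticated callers rather than assume the client layer is the only way in. The route, token store, and check below are hypothetical stand-ins, not a recommended production scheme:

```python
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
VALID_TOKENS = {"example-token"}  # stand-in for a real token validator

@app.route("/api/v1/accounts")
def accounts():
    token = request.headers.get("Authorization", "").removeprefix("Bearer ")
    if token not in VALID_TOKENS:
        abort(401)  # unauthenticated requests never reach the data layer
    return jsonify([{"id": 1, "balance": 100}])
```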


Could cryptocurrency be as big as the Internet?

As with every nascent technology, of course, we are not yet seeing all of cryptocurrency’s potential. Yet the winds of change are blowing. Payments are just one small aspect of what Bitcoin and cryptocurrencies enable. With the unique ability to create programmable financial instruments, the ecosystem of technology being built on top of that foundation is enabling diverse new use cases. Solutions like the Lightning Network on top of Bitcoin for fast, small payments, or collateral-based loans for fast liquidity, start to create possibilities beyond the foundational aspects of bitcoin and other cryptos. This could not have come at a better time. Following the pandemic, large retailers are increasingly determined to move to a 100% cashless model. For them, the cost of handling cash across thousands of different stores is an added expense they want to shed. Moving to more digital payment structures, including the adoption of cryptocurrencies, is a path many will start to follow over the next year. There are also security benefits to consider. The cryptographic certainty of cryptocurrencies adds an extra security layer for financial institutions by eliminating the forgery and counterparty risk that other current financial instruments carry today.



Quote for the day:

"Leadership has a harder job to do than just choose sides. It must bring sides together." -- Jesse Jackson

Daily Tech Digest - July 01, 2021

How CIO Roles Will Change: The Future of Work

On the IT side, CIOs sent workers home with laptops and video conferencing software last year. But it's time to reexamine whether those simple tools are adequate. Do workers need bigger displays? Do they need more than one monitor? What about webcams and better microphones, particularly if employees are representing the corporate brand in virtual meetings with external partners and customers? Other technologies getting more attention include anything to do with security in this age of distributed work, such as edge security and VPNs. Companies are also reevaluating their unified collaboration and communications technologies as they look to enhance collaboration in a virtual setting. Employees are spending more time using software such as Microsoft Teams, Cisco Webex, and Zoom. How can those tools be improved? "CIOs have moved from infrastructure officers to innovation officers," Banting said. "CIOs are finding out what technology can do for the business, how it meets their needs, and how it makes them more agile by promoting distributed working. Technology can be used as an asset rather than a liability on the books. That's quite a fundamental shift in the IT department and the roles that CIOs play."


Composable commerce: building agility with innovation

Composable commerce is a microservices-based, modularised architecture that provides organisations with agility through quick, application programming interface (API) driven integrations, from catalogues and product searches to order submissions, inventory, and recommendations. It provides seamless communication between various applications, giving customers new ways to interact and connect with brands on a personal level. Development teams can focus their efforts on speed and innovation, while operations can make time for back-end updates, compliance releases, and testing. All this can be done without affecting front- or back-end operations. It enables collaboration between departments, so development, operations, marketing, ecommerce, data, finance, and other areas can align and operate as one agile platform. Everything can work together cohesively, and with siloes gone, products can be brought to market quickly and efficiently without manual intervention.


New approaches for a new era: the mission-critical tools for post-Covid business success

Taking an agile approach enables workforces – especially project management teams – to adapt quickly and easily, promoting creative, out-of-the-box thinking throughout the business. Businesses that have embraced business agility have found that teams work better together, and their decision-making processes often become much quicker than would have been possible otherwise. To enable adaptability, employers need to find ways to drive employee engagement and efficiency regardless of where people are. ... The uptake of innovative technologies that drive true workplace collaboration spans broader work management platforms offered by a range of global providers, communication apps such as Microsoft Teams and Slack, and toolchains for developing and deploying software such as Azure DevOps. Their use has been made easier because they can often be integrated, allowing teams to use the tools they want for various purposes while still keeping collaborative efforts connected. These types of intuitive solutions enable enterprises to rapidly adjust tactics, resources and personnel to keep operations on course when business conditions shift dramatically – providing organizations with a competitive edge through the current health and economic crisis and in a post-Covid world.


Microsoft and Google prepare to battle again after ending six-year truce

The pact was reportedly forged to avoid legal battles and complaints to regulators. It meant we haven’t seen Microsoft and Google complaining publicly about each other since the days of Scroogled, a campaign that attacked Google’s privacy policies. Now the gloves appear to be off once again, and we’ve seen some evidence of that recently. Google slammed Microsoft for trying to “break the way the open web works” earlier this year, after Microsoft publicly supported a law in Australia that forced Google to pay news publishers for their content. Microsoft also criticized Google’s control of the ad market, claiming publishers are forced to use Google’s tools that feed Google’s revenues. The rivalry between the two has been unusually quiet over the past five years, thanks to this legal truce. Microsoft was notably silent during the US government’s antitrust suit against Google last year, despite being the number two search engine at the time. The Financial Times reports that the agreement between Microsoft and Google was also supposed to improve cooperation between the two firms, and Microsoft was hoping to find a way to run Android apps on Windows.


Continuous Integration and Deployment for Machine Learning Online Serving and Models

One thing to note is that we have continuous integration (CI) and continuous deployment (CD) for both models and services, as shown above in Figure 1. We arrived at this solution after several iterations to address some of the MLOps challenges that emerged as the number of models trained and deployed grew rapidly. The first challenge was to support a large volume of model deployments on a daily basis while keeping the Real-time Prediction Service highly available. We will discuss our solution in the Model Deployment section. The memory footprint associated with a Real-time Prediction Service instance grows as newly retrained models get deployed, which presented our second challenge. A large number of models also increases the amount of time required for model downloading and loading during instance (re)starts. We observed that a large portion of older models received no traffic once newer models were deployed. We will discuss our solution in the Model Auto-Retirement section. The third challenge concerns model rollout strategies: machine learning engineers may choose to roll out models through different stages, such as shadow, testing, or experimentation.
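
A hypothetical sketch of what an auto-retirement pass might look like; the field names, retention window, and model records are invented for illustration, not the service's actual implementation:

```python
# Unload models that have served no traffic within a retention window so the
# prediction service's memory footprint stops growing with every deployment.
from datetime import datetime, timedelta

RETENTION = timedelta(days=30)
models = [
    {"name": "ranker_v12", "last_request": datetime(2021, 6, 28)},
    {"name": "ranker_v9", "last_request": datetime(2021, 4, 2)},
]

now = datetime(2021, 7, 1)
for m in models:
    if now - m["last_request"] > RETENTION:
        print(f"retiring {m['name']}: no traffic in {RETENTION.days} days")
```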


After EI, DI?

In thinking through what a practical model of digital intelligence might look like, we thought it would be useful to identify three elements that make up best practices for operating in a digital environment. One is the analytical and cognitive component — in essence, how to make sense of the welter of information and data that the digital world offers. The second is the need to collaborate with others in new ways and through new mediums. The third is the practical mastery and application we need to demonstrate. This third element is akin to how Robert J. Sternberg, James C. Kaufman and Elena L. Grigorenko describe “practical intelligence”; that is to say, how we manage real-world situations or, in our case, navigate the digital world successfully. This is an ability, we would argue, that entails a different, or at least greatly modified, set of skills from those we use in face-to-face environments. ... We aren’t proposing that digital intelligence be treated as true intelligence, but rather as a loose framework to help us identify the knowledge, skills, attitudes and behaviors that make up the “digital sensibility” needed to operate and succeed in increasingly digital organizations and marketplaces.


SRE vs DevOps: Comparing Two Distinct Yet Similar Software Practices

CTOs, product managers, software executives, and process specialists are looking for new ways to enhance the trustworthiness of their software systems without compromising speed or quality. SRE and DevOps are two such software methodologies that are popular today in the world of software development. What does SRE stand for? SRE stands for Site Reliability Engineering. The two practices share similar principles and goals, which can make them look like competitors: two sides of the same coin, each aiming to close the gap between development and operations teams. Yet they have distinct characteristics that set them apart. Rather than being two competing procedures for software operations, SRE and DevOps are more like partners that work together to solve organizational hurdles and deliver software quickly. It is worth understanding what these concepts individually mean, what they have in common, how they differ from each other, and how they fit together like pieces of the same puzzle.


How to support collaboration between security and developers

Like everyone else, security people want to see the company succeed, and see cool stuff happen. Developers also care about more than just delivery of code; plus they know that if something bad happens, there are significant implications that they want to avoid. While open lines of communication and mutual understanding are key, it is equally important that DevSecOps teams have a toolset that is similarly integrated and capable of tracking and addressing the changes happening in your organization. Whether we’re talking about changes in cloud providers, the deployment stack, or something else, there is a clear need to have a platform that will work where you are—in the cloud or on-premises. ... While tools are an essential element of enabling DevSecOps, there remain other challenges to be resolved. These include the “unknown unknowns” that organizations encounter as they speed up their digital transformation. For example, organizations across the board rushed to scale up their cloud environments in response to the pandemic last year. However, in the rush to do so, many did not scale up their security and governance processes at the same time and rate.


5 Mistakes I Wish I Had Avoided in My Data Science Career

Do I want to be a data engineer or a data scientist? Do I want to work with marketing and sales data, or do geospatial analysis? You may have noticed that I have been using the term DS so far in this article as a general term for a lot of data-related career paths (e.g. data engineer, data scientist, data analyst, etc.); that’s because the lines between these titles are so blurred in the data world these days, especially in smaller companies. I have observed a lot of data scientists who see themselves as ONLY data scientists building models and pay no attention to business aspects, and data engineers who focus only on data pipelining and don’t want to know anything about the modeling going on in the company. The best data talents are the ones who can wear multiple hats, or are at least able to understand the processes of the other data roles. This comes in especially handy if you want to work in an early-stage or growth-stage startup, where functions might not be as specialized yet and you are expected to be flexible and cover a variety of data-related responsibilities.


Responsible applications of technology drive real change

Thanks to the Digital Revolution, many things that seemed impossible just a few years ago are now commonplace. No one can deny that our productivity – and indeed, enjoyment – has been dramatically improved by technologies ranging from AI to Big Data, 5G and the IoT. While new applications for these technologies are being found seemingly every day, it’s increasingly important to ask how we can utilise technology in a responsible way, to change and improve people’s lives in critical areas like education, healthcare and the environment. The good news is that work is already underway to apply technology in meaningful ways. Take, for instance, the support being provided for young African women programmers in marginalised communities. They are benefitting from free online training and free access to cloud computing resources. The aim of this project is to create one million female coders by 2030 with the objective of improving their life outcomes by helping them along a career path in engineering and other practical subjects. The iamtheCODE initiative provides them with tailored courses on a range of technical topics including cloud computing, data analysis, machine learning and security.



Quote for the day:

"Leaders know the importance of having someone in their lives who will unfailingly and fearlessly tell them the truth." -- Warren G. Bennis

Daily Tech Digest - June 30, 2021

DigitalOcean aligns with MongoDB for managed database service

There is, of course, no shortage of DBaaS options these days. DigitalOcean is betting that its Managed MongoDB service will extend the appeal of its cloud service not only to developers but also to SMBs that are looking for less costly alternatives to the three major cloud service providers, Cooks said. MongoDB already has a strong focus on developers who prefer to download an open source database to build their applications. In addition to not having to pay upfront licensing fees, in many cases developers don’t need permission from a centralized IT function to download a database. However, once an application is deployed in a production environment, some person or entity will have to manage the database. That creates the need for the DBaaS platform from MongoDB that DigitalOcean is now reselling as an OEM partner, said Alan Chhabra, senior vice president for worldwide partners at MongoDB. The DigitalOcean Managed MongoDB service is an extension of an existing relationship between the two companies that takes managed database services to the next logical level, Chhabra asserted. “We have a long-standing relationship,” he said.
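
For developers, the appeal of a managed offering is that using it looks exactly like using any MongoDB instance. A minimal PyMongo sketch; the connection string below is a placeholder, not a real DigitalOcean endpoint:

```python
from pymongo import MongoClient

# Connect to a managed cluster via its (placeholder) connection string.
client = MongoClient("mongodb+srv://user:pass@example-cluster.example.com/")
db = client["appdb"]

db.orders.insert_one({"sku": "A-100", "qty": 2})    # write a document
print(db.orders.count_documents({"sku": "A-100"}))  # and read it back
```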


Digital transformation at SKF through a data-driven manufacturing approach using Azure Arc enabled SQL

As SKF looked for a solution that supported its data-driven manufacturing vision for the Factories of the Future, it wanted one that supported distributed innovation and development, high availability, scalability, and ease of deployment. SKF wanted each of its factories to be able to collect, process, and analyze data and make real-time decisions autonomously while being managed centrally. At the same time, it had constraints around data latency, data resiliency, and data sovereignty for critical production systems that could not be compromised. The drivers behind adopting a hybrid cloud model came from factories having to meet customer performance requirements, many of which depend on the ability to analyze and synthesize data. Recently, data analytics paradigms have shifted from big data analysis in the cloud to more data-driven manufacturing at the machine, production line, and factory edge. Adopting cloud-native operating models, but in a capacity where workloads can execute physically on-premises at the factories, turned out to be the right choice for SKF.


A new dawn for enterprise automation – from long-term strategy to an operational imperative

To drive sustainable change, organisations need to take a large-scale, end-to-end strategic approach to implementing enterprise automation solutions. On one level, this is a vital step to avoid future architecture problems. Businesses need to spend time assessing their technology needs and scoping out how technology can deliver value to their organisation. Take, for example, low-code options like drag-and-drop tools. This in-vogue technology is viewed by companies as an attractive, low-cost option for creating intuitive interfaces for internal apps that gather employee data – as part of a broad automation architecture. The issue is that lots of firms rush the process, failing to account for functionality problems that regularly occur when integrating with existing, often disparate systems. This is where strategic planning comes into its own: ensuring firms take the time to get the UX to the high standard required, as well as identifying how to deploy analytics or automation orchestration solutions to bridge these gaps and successfully deliver automation. With this strategic mindset, there is a huge opportunity for businesses to use the thriving market for automation to empower more innovation from within the enterprise.


The Rise Of NFT Into An Emerging Digital Asset Class

The nature of NFTs – unique, irreplaceable, immutable, and non-fungible – makes them an attractive asset for investors and creators alike. NFTs have empowered creators to monetize and value their digital content, be it music, videos, memes, or art, on decentralized marketplaces, without the hassles that a modern-day creator typically goes through. NFTs, at their core, are digital assets representing real-world objects. ... NFTs solve the age-old problems that creators like you and I have always faced in protecting our intellectual property from being reproduced or distributed across the internet. The most popular standards for NFTs today are ERC-721 and ERC-1155. ERC-721 was used in the majority of early NFTs until ERC-1155 was introduced. These token standards have laid the foundation for assets that are programmable and modifiable, setting the cornerstone for digital ownership and leading to all sorts of revolutionary possibilities. The NFT ecosystem has found its way into various industries as more people join hands and dive deeper into its novel possibilities.
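
The ownership claim behind an ERC-721 token is publicly verifiable on-chain. A short web3.py sketch: ownerOf is part of the ERC-721 standard, but the RPC endpoint, contract address, and token ID below are placeholders:

```python
from web3 import Web3

# Minimal ABI fragment for the standard ERC-721 ownerOf function.
ERC721_ABI = [{
    "name": "ownerOf", "type": "function", "stateMutability": "view",
    "inputs": [{"name": "tokenId", "type": "uint256"}],
    "outputs": [{"name": "owner", "type": "address"}],
}]

w3 = Web3(Web3.HTTPProvider("https://mainnet.example/rpc"))  # placeholder node
nft = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000",  # placeholder contract
    abi=ERC721_ABI,
)
print(nft.functions.ownerOf(1).call())  # the token's current owner, on-chain
```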


Three Principles for Selecting Machine Learning Platforms

Of the challenges this company faced with its previous data management system, the most complex and risky involved data security and governance. The teams managing data access were database admins, familiar with table-based access. But the data scientists needed to export datasets from these governed tables to get data into modern ML tools. The security concerns and ambiguity from this disconnect resulted in months of delays whenever data scientists needed access to new data sources. These pain points led them toward selecting a more unified platform that allowed DS & ML tools to access data under the same governance model used by data engineers and database admins. Data scientists were able to load large datasets into Pandas and PySpark dataframes easily, and database admins could restrict data access based on user identity and prevent data exfiltration. ... A data platform must simplify collaboration between data engineering and DS & ML teams, beyond the mechanics of data access discussed in the previous section. Common barriers are caused by these two groups using disconnected platforms for compute and deployment, data processing, and governance.
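
A sketch of that unified-access pattern; the table name and session setup are hypothetical, and the point is that the same governed table serves both engines with no export step:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("governed-access").getOrCreate()

df = spark.table("finance.transactions")   # access checked by the platform
sample_pdf = df.limit(10_000).toPandas()   # small slice into pandas for EDA
print(sample_pdf.describe())
```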


Introduction To AutoInt: Automatic Integration For Fast Neural Volume Rendering

AutoInt, short for automatic integration, is a framework for fast, high-quality neural volume rendering with deep networks. It learns closed-form solutions to the image volume rendering equation, an integral equation that accumulates transmittance and emittance along rays to render an image. While conventional neural renderers need hundreds of samples along each ray, and thus hundreds of costly forward passes through a network, to evaluate such integrals, AutoInt evaluates them with far fewer forward passes. For training, it first instantiates the computational graph corresponding to the derivative of a coordinate-based network. The graph is then fitted to the signal to be integrated. After optimization, it reassembles the graph to obtain a network that represents the antiderivative. The fundamental theorem of calculus then enables the calculation of any definite integral in just two evaluations of the network. Applying this approach to neural image rendering markedly improves the trade-off between rendering speed and image quality, improving render times by more than 10× at the cost of slightly reduced image quality.
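
A minimal PyTorch sketch of the core idea (my own illustration, not the authors' code): fit the derivative of a small MLP to a signal, then use the MLP itself as the antiderivative, so any definite integral costs two forward passes:

```python
import torch

net = torch.nn.Sequential(  # F(x): the antiderivative network
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def dF(x):
    # Derivative of the network w.r.t. its input, built with autograd.
    x = x.requires_grad_(True)
    y = net(x)
    return torch.autograd.grad(y.sum(), x, create_graph=True)[0]

f = lambda x: torch.sin(3 * x)           # toy signal to integrate
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):                    # fit dF to f on random samples
    x = torch.rand(256, 1) * 2 - 1
    loss = ((dF(x) - f(x)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Definite integral of f over [a, b] in just two evaluations of F.
a, b = torch.tensor([[-0.5]]), torch.tensor([[0.8]])
with torch.no_grad():
    print(float(net(b) - net(a)))
```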


How Google is Using Artificial Intelligence?

In the past, we depended on paper maps or the suggestions of people well-versed in the routes to our destinations. The problem was that we rarely reached our destinations on time. Now you need not seek such suggestions from people or a paper map, as Google Maps has solved these difficulties. Covering over 220 countries and territories, including the United States, Pakistan, and Australia, it lets one affordably reach the places already decided on. You may ask what technology is embedded, and the answer is Artificial Intelligence. The main concept is global localization, which relies on AI. This helps Google Maps understand your current or future orientation. The application can then precisely spot your longitude and latitude, and as you or your vehicle proceed, Google Maps localizes against hundreds of trillions of Street View images. As you keep traversing, the application announces a series of suggestions, helping you reach a shopping mall, airport, or other transit station. Apart from this, you can prepare a list of places you will visit, set routing options to your preferences, explore the Street View option in Live mode, and so on.


What is edge computing and why does it matter?

There are as many different edge use cases as there are users – everyone’s arrangement will be different – but several industries have been particularly at the forefront of edge computing. Manufacturers and heavy industry use edge hardware as an enabler for delay-intolerant applications, keeping the processing power for things like automated coordination of heavy machinery on a factory floor close to where it’s needed. The edge also provides a way for those companies to integrate IoT applications like predictive maintenance close to the machines. Similarly, agricultural users can use edge computing as a collection layer for data from a wide range of connected devices, including soil and temperature sensors, combines, tractors, and more. The hardware required for different types of deployment will differ substantially. ... Connected-agriculture users, by contrast, will still require a rugged edge device to cope with outdoor deployment, but the connectivity piece could look quite different – low latency might still be a requirement for coordinating the movement of heavy equipment, but environmental sensors are likely to have both higher range and lower data requirements, so an LP-WAN connection, Sigfox, or the like could be the best choice there.


Artificial Intelligence (AI): 4 novel ways to build talent in-house

To discover the gems hidden across your organization, you must start maintaining a self-identified list of skills for every employee. The list must be updated every six months and be openly searchable by associates to make it useful and usable. Palmer recommends self-classifying each individual’s skills into four categories: expert, functioning, novice, and desired stretch assignment. This allows teams with hiring needs to scout for individuals with ready skills and those with growth aspirations in the five competencies needed for AI. Finding the right content to upskill your in-house teams is a challenge. Despite the rapid mushrooming of training portals and MOOCs (massive open online courses), the curriculums may not meet your organization’s specific needs. However, with access to such great content online, often for free, it may not make sense to recreate your content. “You must design your own curriculum by curating content from multiple online sources,” says Wendy Zhang, director of data governance and data strategy at Sallie Mae. Base the training plan on your team’s background, roles, and what they need to succeed. 


Solving Mysteries Faster With Observability

Let's start by looking at the sources that we turn to when we look for clues. We often begin with observability tooling. Logs, metrics, and traces are the three pillars of observability. Logs give a richly detailed view of an individual service and provide the service a chance to speak its own piece about what went right or what went wrong as it tried to execute its given task. Next, we have metrics. Metrics indicate how the system or subsets of the system, like services, are performing at a macro scale. Do you see a high error rate somewhere, perhaps in a particular service or region? Metrics give you a bird's eye view. Then we have traces, which follow individual requests through a system, illustrating the holistic ecosystem that our request passes through. In addition to observability tooling, we also turn to metadata. By metadata, I mean supplemental data that helps us build context. For us at Netflix, this might be, what movie or what show was a user trying to watch? What type of device were they using? Or details about the build number, their account preferences, or even what country they're watching from. Metadata helps add more color to the picture that we're trying to draw.
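
A tiny illustration of two of the three pillars, logs and metrics, using only the Python standard library; the service name and error branch are contrived for the example:

```python
import logging
from collections import Counter

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("playback-service")
metrics = Counter()

def handle_request(title: str, ok: bool) -> None:
    metrics["requests"] += 1                         # macro-scale signal
    if ok:
        log.info("started playback for %s", title)   # per-request detail
    else:
        metrics["errors"] += 1
        log.error("playback failed for %s", title)

handle_request("Example Show", ok=True)
handle_request("Example Show", ok=False)
print(dict(metrics))  # e.g. {'requests': 2, 'errors': 1}
```

Traces, the third pillar, add the cross-service view: each request carries an ID that every service it touches attaches to its own logs and spans.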



Quote for the day:

"A sense of humor is part of the art of leadership, of getting along with people, of getting things done." -- Dwight D. Eisenhower

Daily Tech Digest - June 29, 2021

How to eliminate ransomware risk, not just manage it

There are a multitude of solutions available, all of which are designed to reduce risk and protect specific areas of the network. However, there is one method that is rising in popularity and has proven to be highly effective. Zero Trust approaches to security are being applied by organisations on a daily basis, developed on the grounds that trust should never be given out superfluously – transitioning from “Trust but Verify” to “Verify, then Trust”. Forrester recently announced that Zero Trust can reduce an organisation’s risk exposure by 37% or more. This model eliminates automatic access for any asset, whether internal or external. It instead assumes that the context of any action must be validated before it can be allowed to proceed. Another technique that has emerged as being one of the best for protecting businesses from ransomware attacks, and that is closely aligned to the Zero Trust model, is micro-segmentation. Micro-segmentation restricts adversary lateral movement through the network and reduces a company’s attack surface. A strong security perimeter, whilst important, is no longer enough to protect business IT networks from ransomware threats – since it just takes one breach of the perimeter to compromise the network.


How to conquer synthetic identity fraud

The arrival of our truly physical-digital existence has forced identity protection to the forefront of our minds and amplified the need to understand how, through technology, our identities and behavior can be used to equalize and authenticate our access to all of life’s experiences. Second, there’s been an exceptional rise in all types of fraud, including synthetic fraud. Tackling this will require an intelligent, coordinated defense against cybercriminals employing new and more sophisticated techniques. Not unlike a police database that tracks criminals across different states, there’s a need for platforms where companies can anonymously share data signatures about bad actors with one another so that fraudulent activity becomes much easier to detect. According to the Aite Group, 72% of financial services firms surveyed believe synthetic identity fraud is a much more pressing issue than identity theft, and the majority plan to make substantive changes in the next two years. With collaboration driving that change, we have seen synthetic fraud detection increase by more than 100% in some cases, and the ability to catch forged documents improve by 8% on certain platforms.


A Look at GitOps for the Modern Enterprise

In the GitOps workflow, the system’s desired configuration is maintained in source files stored in the git repository alongside the code itself. Engineers make changes to the configuration files representing the desired state instead of changing the system directly via a CLI. Reviewing and approving such changes goes through standard processes such as pull requests, code reviews, and merges to the master branch. When changes are approved and merged to the master branch, an operator software process is responsible for moving the system’s current state to the desired state defined in the newly updated source files. In a typical GitOps implementation, manual changes are not allowed: all configuration changes are made to files stored in Git. In the strictest case, authority to change the system is granted only to the operator process. In a GitOps model, the infrastructure and operations engineers’ role shifts from implementing infrastructure modifications and application deployments to developing and supporting the GitOps automation and assisting teams in reviewing and approving changes via Git.
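
The operator's core job is a reconcile loop. Here is a minimal sketch of that loop; the fetch and apply helpers are illustrative stand-ins, not a real operator API:

```python
import time

def desired_state() -> dict:
    # Stand-in: a real operator reads manifests from the git repository.
    return {"replicas": 3, "image": "web:1.4.2"}

def current_state() -> dict:
    # Stand-in: a real operator queries the live system (e.g. a cluster API).
    return {"replicas": 2, "image": "web:1.4.2"}

def apply(diff: dict) -> None:
    print(f"converging: {diff}")

for _ in range(3):          # a real operator loops forever or watches events
    desired, current = desired_state(), current_state()
    diff = {k: v for k, v in desired.items() if current.get(k) != v}
    if diff:
        apply(diff)         # drive the system toward the declared state
    time.sleep(1)           # short poll interval for the sketch
```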


Preventing Transformational Burnout

Participation can be a strain, yet it is also a tool – one that comes in different sizes and models – and it is useful. Still, when individuals are forced to participate in anything that doesn’t resonate with their inner motivation, a leader is the one pulling the trigger of burnout. Note that passion is often thought to serve as a band-aid for individual burnout when there is the perception that, “I care so much I must put all my efforts into the matter.” In situations where management doesn’t wish to share decision-making control with others, where employees or other stakeholders are passive or apathetic (or suffering from individual burnout), or in organizational cultures that take comfort in bureaucracy, pushing participatory efforts may be unwise. Luckily, agile stems from participation and self-organization. As you plan for employee participation in your transformation efforts, it’s important to have realistic expectations. Not all “potential associates” desire to participate, and those that do may not yet have the skills to do so productively. As Jean Neumann found in her research on participation in the manufacturing industry, various factors can lead individuals to rationally choose “not” to participate. Neumann further notes, as have others, that participation requires courage.


How AutoML helps to create composite AI?

The most straightforward method for solving the optimization task is a random search over the appropriate block combinations, but meta-heuristic optimization algorithms, such as swarm and evolutionary (genetic) algorithms, are a better choice. In the case of evolutionary algorithms, one should keep in mind that they need specially designed crossover, mutation, and selection operators. Such operators are important for processing individuals described by a DAG; they also make it possible to take multiple objective functions into account and to include additional procedures that create stable pipelines and avoid overcomplication. The crossover operators can be implemented using subtree crossover schemes, in which two parent individuals are chosen and exchange random parts of their graphs. This is not the only possible implementation; there are more semantically complex variants (e.g., one-point crossover). Implementations of the mutation operator may include a random change of the model (or computational block) in a random node of the graph, removal of a random node, or random addition of a subtree.
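
A toy sketch of these operators, with pipelines simplified to linear chains of named blocks (real individuals are DAGs, which makes the operators considerably more involved):

```python
import random

BLOCKS = ["scaler", "pca", "ridge", "forest", "boosting"]

def mutate(pipeline: list[str]) -> list[str]:
    p = pipeline.copy()
    op = random.choice(["swap", "drop", "add"])
    if op == "swap":                   # change the model in a random node
        p[random.randrange(len(p))] = random.choice(BLOCKS)
    elif op == "drop" and len(p) > 1:  # remove a random node
        p.pop(random.randrange(len(p)))
    else:                              # add a random block
        p.insert(random.randrange(len(p) + 1), random.choice(BLOCKS))
    return p

def crossover(a: list[str], b: list[str]) -> list[str]:
    # One-point crossover: the child takes a head from one parent, tail from the other.
    cut = random.randint(1, min(len(a), len(b)) - 1)
    return a[:cut] + b[cut:]

print(mutate(["scaler", "ridge"]))
print(crossover(["scaler", "pca", "ridge"], ["scaler", "forest"]))
```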


AI is driving computing to the edge

Companies are adopting edge computing strategies because the cost of sending their ever-increasing piles of data to the cloud — and keeping it there — has become too expensive. Moreover, the time it takes to move data to the cloud, analyze it, and then send an insight back to the original device is too long for many jobs. For example, if a sensor on a factory machine senses an anomaly, the machine’s operator wants to know right away so she can stop the machine (or have a controller stop the machine). Round-trip data transfer to the cloud takes too long. That is why many of the top cloud workloads seen in the slide above involve machine learning or analysis at the edge. Control logic for factories and sensor fusion needs to happen quickly for it to be valuable, whereas data analytics and video processing can generate so much data that sending it and working on that data in the cloud can be expensive. Latency matters in both of those use cases as well. But a couple of other workloads on the slide indicate where the next big challenge in computing will come from. Two of the workloads listed on the slide involve data exchanges between multiple nodes. 


Data-Wiping Attacks Hit Outdated Western Digital Devices

Storage device manufacturer Western Digital warns that two of its network-attached storage devices - the WD My Book Live and WD My Book Live Duo - are vulnerable to being remotely wiped by attackers and now urges users to immediately disconnect them from the internet. ... The underlying flaw in the newly targeted WD devices is designated CVE-2018-18472 and was first publicly disclosed in June 2019. "Western Digital WD My Book Live (all versions) has a root remote command execution bug via shell metacharacters in the /api/1.0/rest/language_configuration language parameter. It can be triggered by anyone who knows the IP address of the affected device," the U.S. National Vulnerability Database noted at the time. Now, it says, the vulnerability is being reviewed in light of the new attacks. "We are reviewing log files which we have received from affected customers to further characterize the attack and the mechanism of access," Western Digital says. "The log files we have reviewed show that the attackers directly connected to the affected My Book Live devices from a variety of IP addresses in different countries. 


IoT For 5G Could Be Next Opportunity

On the consumer front, a technology currently planned for inclusion in the forthcoming 3GPP Release 17 specification, called NR Light (or Lite), looks very promising. Essentially functioning as a more robust, 5G-network-tied replacement for Bluetooth, NR Light is designed to enable the low-latency, high-security, cloud-powered applications of a cellular connection without the high power requirements of a full-blown 5G modem. Practically speaking, this means we could see things like AR headsets that are tethered to a 5G-connected smartphone use NR Light for their cloud connectivity while being much more power-friendly and battery-efficient. Look for more on NR Light in these and other applications that require very low power in 2022. At the opposite end of the spectrum, some carriers are starting the process of “refarming” the radio spectrum they currently use to deliver 2G and 3G traffic. In other words, they’re going to shut those networks down in order to reuse those frequencies to deliver more 5G service. The problem is that many existing IoT applications use those older networks, because they’re very well suited to the lower data rates of most IoT devices.


Security and automation are top priorities for IT professionals

Organizations across the globe have experienced crippling cyberattacks over the past year that have significantly impacted the global supply chain. Given the growing number of threats, 61% of respondents said that improving security measures continues to be the dominant priority. Cybersecurity systems topped the list of what IT professionals plan to invest in for 2022, with 53% of respondents planning to budget for email security tools such as phishing prevention, and 33% investing in ransomware protection. Cloud technologies were also top of mind this year, with 54% saying their IaaS cloud spending will increase and 36% anticipating growth in spending on SaaS applications. Cloud migration was also a high priority for respondents in 2021, covering migrations across PaaS, IaaS, and SaaS. IT professionals also want to increase their productivity through automation, which ranked second among top technologies for investment; almost half of respondents stated that they will allocate funds for this in 2021.


Making fungible tokens and NFTs safer to use for enterprises

One of the weaknesses of current token exchange systems is the lack of privacy protection they offer beyond very basic pseudonymization. In Bitcoin, for example, transactions are pseudonymous and reveal the Bitcoin value exchanged. That makes them linkable and traceable, presenting threats that are inadmissible in other settings, such as enterprise networks, supply chains, or finance. While some newer cryptocurrencies offer a higher degree of privacy, entirely concealing the actual asset exchanged and the transaction participants, they retain the permissionless character of Bitcoin and others, which presents challenges on the regulatory compliance side. For enterprise blockchains, a permissioned setting is required, in which the identity of participants issuing and exchanging tokens is concealed yet non-repudiable, and transaction participants can be securely identified upon properly authorized requests. A big conundrum in permissioned blockchains lies in accommodating the use of token payment systems while preserving the privacy of the parties involved and still allowing for auditing functionality.



Quote for the day:

"The quality of a leader is reflected in the standards they set for themselves." -- Ray Kroc

Daily Tech Digest - June 28, 2021

Is Quantum Supremacy A Threat To The Cryptocurrency Ecosystem?

Quantum computing has been a potential threat to cryptocurrencies ever since the birth of Bitcoin and its contemporaries. Before we can understand why, let’s dive into how cryptocurrency transactions work, taking Bitcoin as an example. Bitcoin is a decentralized peer-to-peer system for transferring value. This means that, unlike traditional financial institutions that mediate most of the process, Bitcoin users facilitate it themselves, for instance by creating their own addresses. By means of complex algorithms, users calculate a random private key and public address to perform transactions and keep them secure. At the moment, cryptographically secure private keys serve as the only protection for users’ funds. If quantum computers cracked the encryption safeguarding the private keys, it would likely mark the end for Bitcoin and all cryptocurrencies. Not that we want to be the bearers of bad news, but there are already quantum algorithms that, in principle, bypass public-key cryptography: Shor’s algorithm, for example, would enable the extraction of private keys from public keys.
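
A minimal sketch of the keypair relationship at stake, using the Python 'cryptography' package: deriving the public key from the private key is easy, while the reverse is exactly what Shor's algorithm on a sufficiently large quantum computer would make feasible:

```python
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256K1())  # Bitcoin's curve
public_key = private_key.public_key()                  # easy one-way derivation

numbers = public_key.public_numbers()
print(f"public point x: {numbers.x:x}")
# Publishing this point is safe today: recovering the private scalar from it
# is classically infeasible, which is the assumption Shor's algorithm breaks.
```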


Will open-source Postgres take over the database market? Experts weigh in

Stalwart IBM is now offering an EDB Postgres solution. IBM’s Villalobos thinks demand for the open-source, cloud-oriented product is driven as much by enterprise modernization requirements as by anything else. He thinks customers should use IBM’s Rocket cloud (a methodology that helps customers appraise all their database modernization options). “Not all customers totally understand where they are today,” he said. Analyzing what costs could be freed up is part of what IBM offers. Part of what is driving the movement toward EDB and PostgreSQL is that EDB has done a good job of getting Postgres working for the enterprise, according to Villalobos. “Is there any database that can finally replace Oracle in the world?” Young asked. The interoperability and compatibility between Oracle and EDB make EDB a contender, he believes. Tangible differences, according to Sheikh, include getting environments running more quickly, with “less latency in terms of agility,” and the ability to deploy anywhere, which are attractive factors for the data scientist.


The biggest post-pandemic cyber security trends

Corporate culture often brings to mind ping pong tables, free beers on Fridays and bean bags, but what about security and business continuity planning? Well, they are about to become just as important. Cyber security needs to become front of mind for all employees, not just those who work in IT. One way to do this is through “war-gaming” activities, training employees and creating rapid response plans to ensure that everyone is prepared in case of an attack. The roles of the C-suite have been changing for quite some time now, as each role becomes less siloed and the whole company is viewed more as a whole than as separate teams. The evolution of the chief information security officer (CISO) is a great example of this – rather than being a separate role to the chief information officer (CIO) or chief technology officer (CTO), the CISO is now responsible for security, customer retention, business continuity planning and much more. Now, this change needs to spread to the rest of the business so that all employees prioritise security and collaboration, whatever their level and role. The natural result of this is that teams will become more open and better at information sharing which will make it easier to spot when there has been a cyber security issue, as everyone will know what is and isn’t normal across the company.


Why SAFe Hurts

Many transformations begin somewhere after the first turn on the Implementation Roadmap. Agile coaches will often engage after someone has, with the best of intentions, decided to launch an Agile Release Train (ART), but hasn’t understood how to do so successfully. As a result, the first Program Increment, and SAFe, will feel painful. Have you ever seen an ART that is full of handoffs and is unable to deliver anything of value? This pattern emerges when an ART is launched within an existing organizational silo, instead of being organized around the flow of value. When ARTs are launched in this way, the same problems that have existed in the organization for years become more evident and more painful. For this reason, many Agile and SAFe implementations face a reboot at some point. Feeling the pain, an Agile coach will help leaders understand why they’re not getting the expected results. Here’s where organizations will reconsider the first straight of the Implementation Roadmap, find time for training, and re-launch their ARTs. This usually happens after going through a Value Stream Identification and ART Identification workshop to best understand how to organize so that ARTs are able to deliver value.


4 benefits of modernizing enterprise applications

Legacy applications, often based on monolithic architectures, are not only difficult but also costly to update and maintain, since all of the application’s components are bundled together. Even when updated, they can produce integration complexities that waste time and resources. By modernizing an application to a microservices architecture, components become smaller, loosely coupled, and able to be deployed and scaled independently. With cloud architecture growing more complex, IT services providers are expected to deliver the best combination of services to fulfil customer needs. They need to conduct a deep-dive analysis of the customer’s business, technical costs, and the contribution of each application in the IT portfolio. Based on the functional and technical value that each application brings, the IT services provider should recommend the most compatible modernization and migration approach. In the process, they will enable rationalization and optimization of the customer’s IT landscape. Thus, only the handful of applications that are both functionally and technically relevant are chosen for migration to the cloud.


Unlocking the Potential of Blockchain Technology: Decentralized, Secure, and Scalable

Some blockchains select users to add and validate the next block by having them devote computing power to solving cryptographic riddles. That approach has been criticized for being inefficient and energy intensive. Other blockchains give users holding the associated cryptocurrency power to validate new blocks on behalf of everyone else. That approach has been criticized for being too centralized, as relatively few people hold the majority of many cryptocurrencies. Algorand also relies on an associated cryptocurrency to validate new blocks. The company calls the currency Algo coins. Rather than giving the power to validate new blocks to the people with the most coins, however, Algorand has owners of 1,000 tokens out of the 10 billion in circulation randomly select themselves to validate the next block. The tokens are selected in a microsecond-long process that requires relatively little computing power. The random selection also makes the blockchain more secure by giving no clear target to hackers, helping Algorand solve the “trilemma” put forth by the Ethereum founder with a scalable, secure, and decentralized blockchain.
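
A toy model of that stake-weighted self-selection in Python. This is my illustration of the probabilistic idea only; Algorand's actual sortition uses verifiable random functions, not a simple binomial draw:

```python
import numpy as np

TOTAL_SUPPLY = 10_000_000_000  # tokens in circulation
COMMITTEE = 1_000              # tokens that select themselves each round
p = COMMITTEE / TOTAL_SUPPLY   # each token's independent chance of selection

rng = np.random.default_rng(7)
stakes = {"whale": 2_000_000_000, "fund": 50_000_000, "retail": 10_000}
for holder, stake in stakes.items():
    wins = rng.binomial(stake, p)  # how many of this holder's tokens win
    print(f"{holder:7s} expected {stake * p:8.2f}, drew {wins}")
```

Because every token draws independently, a holder's expected influence is proportional to stake, yet no attacker knows in advance which holders will be selected.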


4 Skills Will Set Apart Tomorrow’s Data Scientists

While data science is complex, the widening data literacy gap threatens innovation and collaboration across teams. Accenture found that 75% of employees read data, yet only 21% “are confident with their data literacy skills.” While organizations should invest in data literacy across the entire organization to boost productivity, today’s data scientists should learn how to best communicate the fundamentals behind data. The ability to explain concepts like variance, standard deviation, and distributions will help data scientists explain how data was collected, what a data set reveals, and whether it appears valid. These insights are helpful when communicating data to other stakeholders, especially the C-suite. ... The best data scientists are also adept storytellers, providing the necessary context about data sets and explaining why the data is important in the larger picture. When sharing a new set of data or the results of a data project, focus on crafting a narrative around the top three things the audience should walk away with. Reiterate these points throughout whatever medium you choose -- presentation, email, interactive report, etc. -- to move your audience to action.
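
A small, self-contained example of those fundamentals, using invented sample data and Python’s standard library:

```python
import statistics

# Invented sample: daily sign-ups over a week, with one suspicious spike.
daily_signups = [112, 98, 120, 105, 99, 310, 101]

mean = statistics.mean(daily_signups)
variance = statistics.variance(daily_signups)  # sample variance
std_dev = statistics.stdev(daily_signups)      # sample standard deviation

print(f"mean={mean:.1f}, variance={variance:.1f}, std dev={std_dev:.1f}")
# A standard deviation this large relative to the mean points at the 310
# spike, prompting exactly the questions the excerpt raises: how was the
# data collected, and does it appear valid?
```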


How to migrate Java workloads to containers: 3 considerations

Containerization and orchestration also overlap with two other key trends right now: application migration and application modernization. Migration typically refers to moving workloads from one environment (today, usually a traditional datacenter) to another (usually a cloud platform). Modernization, often used as an umbrella term, refers to the various methods of migrating applications to a cloud environment. These run the gamut from leaving the code largely as-is to a significant (or total) overhaul in order to optimize a workload for cloud-native technologies. Red Hat technology evangelist Gordon Haff notes that this spectrum bears out in Red Hat’s enterprise open source research: the 2020 report found a healthy mix of strategies for managing legacy applications, including “leave as-is” (31 percent), “update or modernize” (17 percent), “re-architect as cloud-enabled” (16 percent), and “re-architect as cloud-native” (14 percent). “Leave as-is” is self-explanatory. “To whatever large degree containers are the future, you don’t need to move everything there just because,” Haff says. But as Red Hat’s research and other industry data reflect, many organizations are indeed containerizing some workloads. Containers can play a role in any of the other three approaches to legacy applications.


A sector under siege: how the utilities industry can win the war against ransomware

Although the utilities sector is typically resilient to unexpected macro changes, current market conditions have forced every industry to drastically accelerate digital transformation plans. The utilities sector is no exception, which perhaps explains its sudden shift to the cloud. As a result of this rapid transformation, however, 55% of utilities companies admit that their security measures haven’t kept up with the complexity of their IT infrastructure, meaning they have less visibility and control of their data than ever before. This ‘chink in the armour’ could be their downfall when an attacker strikes. So, it comes as no surprise that there are lingering concerns around cloud security for two-thirds (67%) of utilities companies. Other apprehensions around cloud adoption include reduced data visibility (59%) and risk of downtime (55%). These are valid concerns given that almost two-thirds (64%) of utilities sector companies admit that their organisation’s approach to dealing with cyber attacks could be improved, while increasing resiliency to ransomware and data governance are among their top three priorities.


Danske Bank’s 360° DevSecOps Evolution at a Glance

There are actually several enablers for this evolution, a combination of internal and external factors. From an internal perspective, first was our desire to further engineer, and in some cases reverse-engineer, our SDLC. We wanted to increase release velocity and shorten time to market, improve developer productivity, improve the interoperability of our capabilities and modernize our technological landscape; in addition, we wanted to find a pragmatic equilibrium between speed, reliability and security, while shifting the latter two as far left in the SDLC as sensible. Last but not least, we wanted to collapse the silos (business to IT and IT to IT) across the internal DevOps ecosystem by improving collaboration and communication and building a coalition around our DevOps vision. From an external perspective, competition is of course a factor. In an era of digital disruption, time to market and reliability of services and products are vital. DevOps practices foster improvements in those quality aspects. In addition, we as a bank are heavily regulated, with several requirements and guidelines from the FSA and EBA.



Quote for the day:

"Leadership is the creation of an environment in which others are able to self-actualize in the process of completing the job." -- John Mellecker

Daily Tech Digest - June 27, 2021

Sparking the next cycle of IT spending

The past cycles were driven by vendors, by a shift in technology that created a new productivity paradigm. In fact, we could fairly say that every past cycle was driven by a single vendor, IBM. They popularized and drove commercial computing in the first cyclic wave. Their transaction processing capability (CICS, for those interested in the nitty gritty) launched the second wave, and their PC the third. Can IBM, or any vendor, pull together the pieces we need? These days, for something like contextual computing, we’re probably reliant on open-source software initiatives, and I’m a believer in them, with a qualification. The best open-source contextual computing solution wouldn’t likely be developed by a community. It would be done by a single player and supported and enhanced by a community. Open-source simplifies the challenge that being the seminal vendor for contextual computing would pose, but some vendor is still going to have to build the framework and make that initial submission. Otherwise we’ll spend years and won’t accomplish anything in the end.


Hackers Crack Pirated Games with Cryptojacking Malware

Dubbed “Crackonosh,” the malware — which has been active since June 2018 — lurks in pirated versions of Grand Theft Auto V, NBA 2K19 and Pro Evolution Soccer 2018 that gamers can download free in forums, according to a report posted online Thursday by researchers at Avast. The name means “mountain spirit” in Czech folklore, a reference to the researchers’ belief that the creators of the malware are from the Czech Republic. Cracked software is a version of commercial software that is offered for free, but often with a catch — the code of the software has been tampered with, typically to insert malware or for some other purpose beneficial to whoever cracked it. In the case of Crackonosh, the aim is to install the coinminer XMRig to mine Monero cryptocurrency from within the cracked software downloaded to an affected device, according to the report. So far, threat actors have reaped more than $2 million, or 9,000 XMR in total, from the campaign, researchers said. Crackonosh also appears to be spreading fast, affecting 222,000 unique devices in more than a dozen countries since December 2020. As of May, the malware was still getting about 1,000 hits a day, according to the report.


Cryptocurrency Blockchains Don’t Need To Be Energy Intensive

Blockchain is a generic term for the way most cryptocurrencies record and share their transactions. It’s a type of distributed ledger that parcels up those transactions into chunks called “blocks” and then chains them together cryptographically in a way that makes it incredibly difficult to go back and edit older blocks. How often a new block is made and how much data it contains depends on the implementation. For Bitcoin, that time frame is 10 minutes; for some cryptocurrencies it’s less than a minute. Unlike most ledgers, which rely on a central authority to update records, blockchains are maintained by a decentralized network of volunteers. The ledger is shared publicly, and the responsibility for validating transactions and updating records is shared by the users. That means blockchains need a simple way for users to reach agreement on changes to the ledger, to ensure everyone’s copy of the ledger looks the same and to prevent fraudulent activity. These are known as consensus mechanisms, and they vary between blockchains.
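
A minimal sketch of that chaining idea, with invented field names; real implementations add timestamps, Merkle trees, and a consensus mechanism on top:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash a block's canonical JSON form.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(transactions: list, prev_hash: str) -> dict:
    return {"transactions": transactions, "prev_hash": prev_hash}

genesis = make_block(["alice pays bob 5"], prev_hash="0" * 64)
block1 = make_block(["bob pays carol 2"], prev_hash=block_hash(genesis))

# Editing an older block changes its hash, breaking every later link,
# which is what makes the ledger so difficult to tamper with.
genesis["transactions"][0] = "alice pays bob 500"
print(block_hash(genesis) == block1["prev_hash"])  # False: tamper detected
```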


Quantum computers just took on another big challenge

Classical computers can only offer simplifications and approximations. The Japanese company, therefore, decided to try its hand at quantum technologies, and announced a partnership with quantum software firm CQC last year. "Scheduling at our steel plants is one of the biggest logistical challenges we face, and we are always looking for ways to streamline and improve operations in this area," said Koji Hirano, chief researcher at Nippon Steel. Quantum computers rely on qubits – tiny particles that can take on a special, dual quantum state that enables them to carry out multiple calculations at once. This means, in principle, that the most complex problems that cannot be solved by classical computers in any realistic timeframe could one day be run on quantum computers in a matter of minutes. The technology is still in its infancy: quantum computers can currently only support very few qubits and are not capable of carrying out computations that are useful at a business's scale. Scientists, rather, are interested in demonstrating the theoretical value of the technology, to be prepared to tap into the potential of quantum computers once their development matures.
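
The dual quantum state the excerpt mentions can be made concrete with a toy state-vector calculation. This sketch, assuming NumPy, illustrates superposition only; it says nothing about how Nippon Steel’s scheduling problem would actually be encoded.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])                    # the |0> basis state
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

state = hadamard @ ket0                        # equal superposition of |0> and |1>
probabilities = np.abs(state) ** 2             # Born rule: |amplitude|^2
print(probabilities)                           # [0.5 0.5]
# With n qubits the state vector has 2**n amplitudes, which is the sense
# in which a quantum computer works on many possibilities at once.
```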


Why Agile Fails Because of Corporate Culture

In most companies, however, providing creative space and encouraging failure aren’t really core values, are they? Therefore, it’s the creation of this awareness which is a fundamental obstacle and the only real reason why agility fails because of corporate culture. The weirdest fact about all this: the failure of agility in the company is itself a failure from which companies learn nothing. Why? Because failure is not seen as an opportunity for improvement, but simply as failure. And so it goes on and on; a vicious circle from which companies can only break out through a culture change. This change requires certain skills from all those involved. For some, these skills have yet to be awakened; for others, they’re probably present and may only need to be fostered. But all of those involved will need to be coached, on a long-term basis. Becoming agile is an evolutionary process, and it can’t be avoided or shortcut by borrowing the wisdom of others or applying arbitrary agile frameworks from a consultant’s shelf. And let me stress that: evolution. That means it’s ever-ongoing. It’s not a one-off big bang thing and whoohoo, you’re agile. As a C-level guy or gal, as a board member or shareholder, it will take loads of guts to go that way.


Ag tech is working to improve farming with the help of AI, IoT, computer vision and more

Harvesting is a great example of AI on the farm. A combine is used to separate grain from the rest of the plant without causing damage to corn kernels. "That separation process is not perfect, and it's impossible for a farmer to see every kernel as it makes its way through the different sections of the machine," Bonefas explained. "AI enables the machine to monitor the separation process and to make decisions based on what it's seeing. If the harvest quality degrades, the AI-enabled system automatically optimizes the combine's settings or recommends new settings to the farmer to achieve more favorable results." In other farming operations, AI uses computer vision and machine learning to detect weeds and precisely spray herbicides only where needed. This results in significant cost reductions for farmers, who in the past used sprays across entire fields, even in areas where sprays were not needed. The weed detection technology works because it integrates AI with IoT devices such as cameras that can take good pictures regardless of weather conditions. Computers process imagery from the cameras and use sophisticated machine learning algorithms that detect the presence of weeds and actuate the required nozzles on sprayers at the exact moment needed to spray the weeds.
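
A hypothetical sketch of that detect-and-spray loop; detect_weeds, Detection, and Sprayer are invented stand-ins for the vision model, its output, and the nozzle controller, none of which the article names.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    nozzle_id: int      # which nozzle covers the detected weed
    confidence: float   # model confidence that this is a weed

def detect_weeds(frame) -> list[Detection]:
    # Stand-in: a real system would run a trained vision model here.
    return [Detection(nozzle_id=3, confidence=0.91)]

class Sprayer:
    def actuate(self, nozzle_id: int) -> None:
        print(f"spraying via nozzle {nozzle_id}")

def spray_pass(frames, sprayer: Sprayer, threshold: float = 0.8) -> None:
    # Spray only where weeds are detected, instead of the whole field.
    for frame in frames:
        for det in detect_weeds(frame):
            if det.confidence >= threshold:
                sprayer.actuate(det.nozzle_id)

spray_pass(frames=[object()], sprayer=Sprayer())
```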


Architecture is theory

For organisation versus enterprise, one of the common assumptions is that the two terms are synonyms for each other: that the organisation is the enterprise, that the enterprise is the organisation. In reality, ‘organisation’ and ‘enterprise’ denote two different scope-entities (in essence, ‘how’ versus ‘why’) whose boundaries may coincide – but that special-case is basically useless and dangerously misleading, because it represents a context in which the sole reason for the organisation’s existence is to talk only with itself. The simplest summary here is that the enterprise represents the context, purpose and guiding-story for the respective organisation (or ‘service-in-focus’, to use a more appropriate context-neutral term): we develop an architecture for the organisation, about the enterprise that provides its context. In practice, the scope of enterprise we’d typically need to explore for an enterprise-architecture would be three steps ‘larger’ than the scope for the organisation – the organisation plus its transactions, direct-interactions and indirect-interactions – and also looking ‘inward’ to perhaps the same depth:


Cisco ASA Bug Now Actively Exploited as PoC Drops

In-the-wild XSS attacks have commenced against the security appliance (CVE-2020-3580), as researchers publish exploit code on Twitter. Researchers have dropped a proof-of-concept (PoC) exploit on Twitter for a known cross-site scripting (XSS) vulnerability in the Cisco Adaptive Security Appliance (ASA). The move comes as reports surface of in-the-wild exploitation of the bug. Researchers at Positive Technologies published the PoC for the bug (CVE-2020-3580) on Thursday. One of the researchers there, Mikhail Klyuchnikov, noted that a heap of researchers were now chasing an exploit for the bug, which he termed “low-hanging fruit.” ... “Researchers often develop PoCs before reporting a vulnerability to a developer, and publishing them allows other researchers to both check their work and potentially dig further and discover other issues,” Claire Tills, senior research engineer at Tenable, told Threatpost. “PoCs can also be used by defenders to develop detections for vulnerabilities. Unfortunately, giving that valuable information to defenders means it can also end up in the hands of attackers.”


How edge will affect enterprise architecture: Aruba explains

Logan reckons IoT growth at the edge will be significant, as enterprise organizations are now starting to look at the network as a far more important component than they did four or five years ago, “where it might [then] have just been four bars of Wi-Fi or connectivity from branch to headquarters,” Logan said. New requirements on architectures will be an end result of this shift, including data-intensive workloads driven by AI and the like. “We are going to find over the next 10 years that a significant amount of the data that is born at the edge and the experiences that are delivered at the edge need a local presence of compute and communications,” Logan added. As for how enterprise architecture will evolve, he uses the example of a healthcare environment, such as a hospital: patient telemetry has to be collected from the bedside. But “what if the point of patient care is in the patient’s home?” he asked. This is a realistic proposition, as we’ve seen during the pandemic with the escalation of remote doctor care. “That’s a completely different set of circumstances, physically and logically, from an enterprise architecture perspective,” Logan said.


Integrating machine learning and blockchain to develop a system to veto the forgeries and provide efficient results in education sector

The advancement of blockchain technology in terms of validation and data security has also been applied in the educational sector, with the validation and security of student data considered prime aspects. Various systems providing such validation and security have been developed. For example, Li and Wu proposed the idea of flashing the system, through which the counterfeiting of student degrees can be avoided. Such a program has to some degree remedied the defects found in current solutions, making blockchain-based certificates a more feasible proposition. Certificates of achievement, grades, and diplomas, among other documents, can be valuable tools when seeking a new school or job. Gopal and Prakash proposed a blockchain-based digital certificate scheme built on the immutable properties of blockchain. This scheme preserves the essential data and eliminates the chance of a prospective employer distrusting a student’s certification. Individual learning records are important for the professional careers of individuals. Certificates support the achievement of learning outcomes in education.
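
The immutability argument boils down to hash anchoring. The sketch below illustrates the general pattern only, not the specific Gopal and Prakash design; the ledger is simulated with a plain set, and all field names are invented.

```python
import hashlib
import json

def certificate_digest(cert: dict) -> str:
    return hashlib.sha256(json.dumps(cert, sort_keys=True).encode()).hexdigest()

cert = {"student": "A. Student", "degree": "BSc", "year": 2021}

# Issuance: the institution publishes the digest to the (immutable) chain.
on_chain = {certificate_digest(cert)}  # a set stands in for the ledger

# Verification: an employer recomputes the digest of a presented copy.
forged = {**cert, "degree": "PhD"}
print(certificate_digest(cert) in on_chain)    # True: genuine
print(certificate_digest(forged) in on_chain)  # False: forgery detected
```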



Quote for the day:

"Energy and persistence conquer all things." -- Benjamin Franklin