Daily Tech Digest - November 05, 2020

Deep Neural Networks Help to Explain Living Brains

Artificial neural networks are built with interconnecting components called perceptrons, which are simplified digital models of biological neurons. The networks have at least two layers of perceptrons, one for the input layer and one for the output. Sandwich one or more “hidden” layers between the input and the output and you get a “deep” neural network; the greater the number of hidden layers, the deeper the network. Deep nets can be trained to pick out patterns in data, such as patterns representing the images of cats or dogs. Training involves using an algorithm to iteratively adjust the strength of the connections between the perceptrons, so that the network learns to associate a given input (the pixels of an image) with the correct label (cat or dog). Once trained, the deep net should ideally be able to classify an input it hasn’t seen before. In their general structure and function, deep nets aspire loosely to emulate brains, in which the adjusted strengths of connections between neurons reflect learned associations. Neuroscientists have often pointed out important limitations in that comparison: Individual neurons may process information more extensively than “dumb” perceptrons do, for example, and deep nets frequently depend on a kind of communication between perceptrons called back-propagation that does not seem to occur in nervous systems.
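To make the training loop described above concrete, here is a minimal sketch in pure Python: a "deep" net with one hidden layer, trained by back-propagation to learn a simple AND-style labelling. The architecture, toy data, learning rate, and epoch count are all illustrative choices, not drawn from the article.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A tiny network: 2 inputs -> 2 hidden units -> 1 output.
# The weights are the adjustable "connection strengths" described above.
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_h = [0.0, 0.0]
w_o = [random.uniform(-1, 1) for _ in range(2)]
b_o = 0.0

def forward(x):
    h = [sigmoid(sum(w_h[j][i] * x[i] for i in range(2)) + b_h[j]) for j in range(2)]
    y = sigmoid(sum(w_o[j] * h[j] for j in range(2)) + b_o)
    return h, y

# Toy labelled data: label 1 only when both inputs are "on".
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

def train_step(lr=0.5):
    global b_o
    for x, t in data:
        h, y = forward(x)
        # Back-propagation: the output error flows backwards to adjust
        # both the output weights and the hidden-layer weights.
        d_y = 2 * (y - t) * y * (1 - y)
        for j in range(2):
            d_h = d_y * w_o[j] * h[j] * (1 - h[j])  # uses pre-update w_o
            w_o[j] -= lr * d_y * h[j]
            for i in range(2):
                w_h[j][i] -= lr * d_h * x[i]
            b_h[j] -= lr * d_h
        b_o -= lr * d_y

before = loss()
for _ in range(2000):
    train_step()
after = loss()
print(f"loss before: {before:.3f}, after: {after:.3f}")
```

After training, the adjusted connection strengths encode the learned association, just as the article describes.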


Future of Corporate Governance Through Blockchain-powered Smart Companies

In essence, a Smart Company is an entirely new form of business entity (akin to an LTD or IBC), except that it rivals all traditional models by being fully automated by blockchain. And it makes just that big of a difference. When you can run your business in a structure that is legally compliant, yet all of its transactions happen in real time and are verified directly on the blockchain, it changes the game. What this means for business owners is that managerial ownership structures become more transparent. Corporate voting becomes easier and more accurate, and secret strategies such as ‘empty voting’ become more difficult to execute. The ability to issue corporate shares as ERC-20 tokens modified for securities laws offers the means to assert and transfer ownership and liabilities of real-world assets with actual value. To give a rough sense of the magnitude of this untapped potential, the total value of illiquid assets, including real estate and gold, has been estimated at no less than $11 trillion, roughly the nominal GDP of China, the world’s second-largest economy today. For shareholders, the Smart Company model offers nearly free trading and transparency in ownership records while simultaneously showing real-time transfers of shares from one owner to another.
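As an illustration of the record-keeping such a model relies on, the sketch below models an ERC-20-style share ledger in plain Python. It is purely a toy: real security tokens run on-chain with compliance logic, and every name here is invented.

```python
# Illustrative only: an in-memory share ledger mimicking the
# balance/transfer behaviour of an ERC-20-style token. A real security
# token would live on-chain with regulatory checks built in.
class ShareLedger:
    def __init__(self, issuer, total_shares):
        self.balances = {issuer: total_shares}
        self.history = []  # every transfer is recorded, in order

    def transfer(self, sender, recipient, amount):
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient shares")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount
        self.history.append((sender, recipient, amount))

ledger = ShareLedger("founder", 1_000)
ledger.transfer("founder", "investor_a", 250)
ledger.transfer("investor_a", "investor_b", 100)
# Ownership and the full transfer history are visible to anyone
# holding a copy of the ledger.
print(ledger.balances)
```

The transparency benefit the article points to comes from exactly this property: balances and the transfer log are one shared, inspectable record rather than private books.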



Agile development: How to tackle complexity and get stuff done

Holt believes his key role as CTO is to create a culture in the organisation where his people feel comfortable and confident to try new things. Rather than being scared of risk-taking, he says tech leaders should encourage their IT professionals to innovate and develop customer-centred products and services in an iterative manner. "Those are the kind of people who aren't afraid of the complexity, who are able to get in amongst it, and that's where you get really good solutions," he says. Holt says engaging with a challenge involves great teamwork. He says his organisation is always on the lookout for people who have an ability to manage complexity, and the solution often involves agility in organisational culture as well as product development. ... Danny Attias, chief digital and information officer at British charity Anthony Nolan, says tech executives looking to deal with complexity must ensure they're working to create a joined-up organisation. More often than not, that means using Agile principles to break down problems into small parts that can be managed effectively across the organisation. "My career has been about decoupling dependencies wherever you possibly can," he says.


The world needs women who code

A lot of women are not aware of the power of IT. The industry’s reputation as a boys’ club belies the fact that women are actually rising in many technology fields, both in number and in title. They may think they have to already know a bunch of code to get started. It's likely that many women simply don’t realize how much opportunity there is for them, even as beginners. A slightly different, yet related, reason is fear. Because of the percentage of men in this field, some women may feel that there will be too much competition, that they won’t be able to measure up against men with experience, or that they'll be overlooked for men without experience. But nowadays, IT companies are making strong efforts to welcome and support women, conducting various programs to encourage women to learn about various tech disciplines, and provide pathways for them to join the industry. And whenever a woman joins this industry, it gives a boost of confidence to other women too. I constantly get inspired by the many women I know who are doing amazing things in tech. ... Admittedly, coding can seem overwhelming in the beginning, but don’t worry—it’s like that for almost everyone. Soon enough, what seems like gibberish at first starts to come together, and you learn to harness it to make things work and accomplish tasks.


Kafka at the Edge — Use Cases and Architectures

Event streaming with Apache Kafka at the edge is not cutting edge anymore. It is a common approach to providing the same open, flexible, and scalable architecture at the edge as in the cloud or data center. Possible locations for a Kafka edge deployment include retail stores, cell towers, trains, small factories, restaurants, etc. I already discussed the concepts and architectures in detail in the past: "Apache Kafka is the New Black at the Edge" and "Architecture patterns for distributed, hybrid, edge and global Apache Kafka deployments". This blog post is an add-on focusing on use cases across industries for Kafka at the edge. To be clear before you read on: Edge is NOT a data center. And "Edge Kafka" is not simply yet another IoT project using Kafka in a remote location. Edge Kafka is actually an essential component of a streaming nervous system that spans IoT (or OT in Industrial IoT) and non-IoT (traditional data-center/cloud infrastructures). The post's focus is scenarios where the Kafka clients AND the Kafka brokers are running on the edge. This enables edge processing, integration, decoupling, low latency, and cost-efficient data processing. Some IoT projects are built like “normal Kafka projects”, i.e., built in the (edge) data center or cloud. 


How smartphones became IoT’s best friend and worst enemy

Relying on the ubiquity of smartphones and the rise of remote controls, users and vendors alike have embraced the move away from physical device interfaces. This evolution in the IoT ecosystem, however, brings major benefits AND serious drawbacks. While users enjoy the remote capabilities of companion apps and vendors bypass the need for hardware interfaces, studies show that they present serious cybersecurity risks. For example, the communication between an IoT device and its app is often neither properly encrypted nor authenticated – and these issues enable the construction of exploits to achieve remote control of victims’ devices. It is important to explain that connected devices have not always been this way. I’m sure others, like me, do not need to cast their minds far back to remember a time when smartphones did not even exist. User input during these halcyon days relied on physical interfaces on the device itself, interfaces that typically consisted of basic touch screens or two-line LCD displays. Though functional, these physical interfaces were certainly limited (and limiting) when compared to the applications that superseded them. Devices without physical interfaces are smaller, consume less power, and look better.


Singapore government rolls out digital signature service

Called Sign with SingPass, the service is being rolled out by Assurity, a subsidiary of the Government Technology Agency (GovTech), together with eight digital signing application providers, including DocuSign, Adobe and Kofax. GovTech said each digital signature is identifiable and cryptographically linked to the signer, while signed documents are platform agnostic and can be viewed with the user’s preferred system. No document data will be transferred during the digital signing process. Assurity will also issue digital certificates for signatures created under the service. Upon Assurity’s accreditation under Singapore’s Electronic Transactions Act, signatures made with the service will be regarded as secure electronic signatures. GovTech said the service will be useful for organisations and their customers amid the growing number of online transactions; the agency will test the service with the Singapore Land Authority (SLA) for the digital signing of property caveats in the coming weeks. Kok Ping Soon, chief executive of GovTech, said the high-security document signing service will help businesses save costs and manpower by eliminating the need to manually verify physical paperwork.


Is your approach to data protection more expensive than useful?

With the recent increase in cyberattacks and exponential data growth, protecting data has become job one for many IT organizations. And in many cases, their biggest hurdle is managing an aging backup infrastructure with limited resources. Tight budgets should not discourage business leaders from modernizing data protection. Organizations that hang on to older backup technology don't have the tools they need to face today's threats. Rigid, siloed infrastructures aren't agile or scalable enough to keep up with fluctuations in data requirements, and they are based on an equally rigid backup approach. Traditional backup systems behave like insurance policies, locking data away until it's needed. That's like keeping an extra car battery in the garage, waiting for a possible crisis. The backup battery might seem like a reasonable preventive measure, but most of the time, it's a waste of space. And if the crisis never arises, it's an unnecessary upfront investment, making it more expensive than useful. In the age of COVID-19, where cash is king and on-site resources are particularly limited, some IT departments are postponing data protection modernization, looking to simplify overall operations and lower infrastructure costs first. That plan can block a company's progress. 


Taking Control of Confusing Cloud Costs

It’s difficult to compare services across multiple clouds, because each provider uses different terminology. What Azure calls a ‘virtual machine’ is called a ‘virtual machine instance’ on GCP and just an ‘instance’ on AWS. A group of these instances would be called ‘autoscaling groups’ on both Amazon and GCP, but Scale Sets on Azure. It’s hard to even keep up with what it is you’re purchasing and whether another cloud even offers a comparable service, as the naming conventions differ. As outlined above with regard to the simple web application using Lambda, it would be very time consuming for someone to compare what it would cost to host a web application in one cloud versus another. It would take technical knowledge of each cloud provider to be able to translate how you could comparably host it with one set of services against another before you even got into prices. Cloud pricing uses an on-demand model, which is a far cry from on-prem, where you could deploy things and leave them running 24/7 without affecting the cost (bar energy). In the cloud, everything is based on the amount of time you use it, either on a per-hour, per-minute, per-request, per-amount or per-second basis.
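The on-demand billing model is easy to reason about with a small calculator. The per-second rate below is hypothetical, not any provider's actual price, but it shows why utilisation, not just instance count, drives the bill.

```python
# Hypothetical rate for illustration -- real cloud prices vary by
# provider, region, and instance type.
PER_SECOND_RATE = 0.0000115       # $ per second for one instance
SECONDS_PER_MONTH = 30 * 24 * 3600

def monthly_cost(instances, utilisation):
    """Cost when instances only run (and bill) `utilisation` of the time."""
    return instances * PER_SECOND_RATE * SECONDS_PER_MONTH * utilisation

always_on = monthly_cost(4, 1.0)        # on-prem habit: running 24/7
business_hours = monthly_cost(4, 0.25)  # billed only ~6 hours per day
print(f"always on: ${always_on:.2f}/mo, scaled to use: ${business_hours:.2f}/mo")
```

The same four instances cost a quarter as much when they only bill while actually in use, which is exactly the shift from the on-prem model the article describes.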


Five ways to avoid digital transformation fatigue

Change fatigue stems from uncertainty and a lack of clarity around the strategic intent and implementation of the program. Too often, digitalisation and new tools are being taken into the company without proper project planning and thinking about how the benefits will be explained to the employees. Have a deep-dive into the thinking of the value proposition narrative before the new digital tool is implemented. Start by finding out if the management and leadership teams are aligned on the transformation's strategic intent and outcomes. If not, then you need to go back to the drawing board. This should ideally map out clear target business outcomes as well as the impact of the transformation to the people, processes, and tools of what’s happening and how it will affect them. Many workers might feel that they should be doing their 'actual job' instead of learning how to navigate with something that they are not sure will benefit them. Be ready to present to each role the necessities of the new tool, and avoid explaining it so that it sounds like the company is the only one that will benefit from it. Incentives for the employees should be clearly stated before the change starts.



Quote for the day:

"Don't just see what others do to you. Also see what you do to others." -- The Golden Mirror

Daily Tech Digest - November 04, 2020

Reworking the Taxonomy for Richer Risk Assessments

With pre-assessment and planning, you need to think about the desired outcome (i.e., identify the risks to the facility) and identify the necessary actions to mitigate or eliminate the risks and associated vulnerabilities. The flow chart above is a detailed view of this phase and includes collecting and digesting documents, identifying the team members and the necessary skill sets, and getting ready for travel. Of course, contacting the "customer" and setting up the necessary on-site logistics are important. ... Don't forget these threats and vulnerabilities can be cyber or physical. They can also be part of the site management and culture. What about training or lack thereof? They can all contribute to the risk profile of the facility. The graphic above offers some elements of the on-site activities. You can see that we have inspections, observations, taking photographs, and looking at the site network and architecture. Even a cyber-vulnerability scan may be part of the site assessment. These activities are intended to be part of the site assessment plan. However, don't let the plan place barriers on your site risk reviews. Feel free to follow leads and evidence of problems, since that is why you are on-site rather than doing a remote risk assessment via Zoom.


How blockchain is set to revolutionize the healthcare sector

Despite its potential, data portability across multiple systems and services is a real issue. There is nothing more valuable and personal to an individual than their personal medical records, so making data shareable across services will inevitably raise concerns around the spectre of data being misused. Currently, data does not flow seamlessly across technology solutions within healthcare. For example, in the UK your hospital records do not form part of your GP records, but the advantages are clear in terms of treatment and preventative care were they to do so. Unfortunately, it is not likely a centralised storage and delivery system will get traction until there is one that can ensure the appropriate encryption and security. The risks are simply too high. Yet, it is an issue that a technology like blockchain can tackle. This is because the purpose of the chain is to store a series of transactions in a way that cannot be altered or changed. What renders it immutable is the combination of two opposing things: the cryptography and its openness. Each transaction is signed with a private key and then distributed amongst a peer-to-peer set of participants. Without a valid signature, new blocks created by data changes are ignored and not added to the chain. 
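The "cannot be altered" property comes from chaining each block to a hash of its predecessor. A minimal sketch of that idea (omitting the private-key signature step the article mentions, and using made-up records):

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents, including the previous block's hash,
    # so changing any earlier record invalidates every later link.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append(chain, record):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"record": record, "prev": prev}
    block["hash"] = block_hash(block)
    chain.append(block)

def is_valid(chain):
    for i, block in enumerate(chain):
        expect_prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev"] != expect_prev:
            return False
        if block["hash"] != block_hash({"record": block["record"], "prev": block["prev"]}):
            return False
    return True

chain = []
append(chain, {"patient": "p-001", "event": "vaccination"})
append(chain, {"patient": "p-001", "event": "blood test"})
assert is_valid(chain)
chain[0]["record"]["event"] = "tampered"  # alter an old entry...
print(is_valid(chain))                    # ...and the chain no longer verifies
```

Any retroactive edit breaks the hash links, which is why altered blocks can be detected and rejected by the other participants.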


UX Patterns: Stale-While-Revalidate

Stale-while-revalidate (SWR) caching strategies provide faster feedback to the user of web applications, while still allowing eventual consistency. Faster feedback reduces the necessity to show spinners and may result in better-perceived user experience. ... Developers may also use stale-while-revalidate strategies in single-page applications that make use of dynamic APIs. In such applications, oftentimes a large part of the application state comes from remotely stored data (the source of truth). As that remote data may be changed by other actors, fetching it anew on each request guarantees that the freshest available data is always returned. Stale-while-revalidate strategies substitute the requirement to always have the latest data for that of having the latest data eventually. The mechanism works in single-page applications in a similar way to HTTP requests. The application sends a request to the API server endpoint for the first time, caches and returns the resulting response. The next time the application makes the same request, the cached response will be returned immediately, while simultaneously the request will proceed asynchronously. When the response is received, the cache is updated, with the appropriate changes to the UI taking place.
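A minimal sketch of the mechanism just described, assuming a synchronous first fetch and a background refresh thread (simplified: no error handling, deduplication, or cache expiry):

```python
import threading
import time

class SWRCache:
    """Minimal stale-while-revalidate sketch: return the cached value
    immediately and refresh it in the background."""
    def __init__(self, fetch):
        self.fetch = fetch  # function that retrieves fresh data
        self.cache = {}

    def get(self, key):
        if key not in self.cache:
            # First request: nothing cached yet, fetch synchronously.
            self.cache[key] = self.fetch(key)
            return self.cache[key]
        stale = self.cache[key]
        # Serve the stale value now; revalidate asynchronously.
        threading.Thread(target=self._revalidate, args=(key,)).start()
        return stale

    def _revalidate(self, key):
        self.cache[key] = self.fetch(key)

calls = []
def fetch(key):
    calls.append(key)
    return f"data-v{len(calls)}"

swr = SWRCache(fetch)
print(swr.get("user"))   # first call: fetched fresh
print(swr.get("user"))   # served stale instantly; refresh runs behind
time.sleep(0.2)          # give the background refresh time to finish
print(swr.cache["user"]) # the cache now holds the revalidated value
```

The second `get` returns immediately with the stale value, matching the "no spinner" behaviour described above, while the cache quietly catches up.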


The Inevitable Rise of Intelligence in the Edge Ecosystem

Edge computing is becoming an integral part of the distributed computing model, says Nishith Pathak, global CTO for analytics and emerging technology with DXC Technology. He says there is ample opportunity to employ edge computing across industry verticals that require near real-time interactions. “Edge computing now mimics the public cloud,” Pathak says, in some ways offering localized versions of cloud capabilities regarding compute, the network, and storage. Benefits of edge-based computing include avoiding latency issues, he says, and anonymizing data so only relevant information moves to the cloud. This is possible because “a humungous amount of data” can be processed and analyzed by devices at the edge, Pathak says. This includes connected cars, smart cities, drones, wearables, and other internet of things applications that consume on-demand compute. The population of devices and scope of infrastructure that support the edge are expected to accelerate, says Jeff Loucks, executive director of Deloitte’s center for technology, media and telecommunications. He says implementations of the new communications standard have exceeded initial predictions that there would be 100 private 5G network deployments by the end of 2020. “I think that’s going to be closer to 1,000,” he says.


Take a Dip into Windows Containers with OpenShift 4.6

Windows Operating System in a container? Who would have thought?!? If you asked me that question a few years back, I would have told you with conviction that it would never happen! But if you ask me now, I will answer you with a big, emphatic yes and even show you how to do so! In this article, I will demonstrate how you can run Windows workloads in OpenShift 4.6 by deploying a Windows container on a Windows worker node. In addition, I will then highlight some of the issues and challenges that I see from a system administrator perspective. ... For customers who have heterogeneous environments with a mix of Linux and Windows workloads, the announcement of a supported Windows container feature on OpenShift 4.6 is exciting news. As of this writing, the supported workloads to run on Windows containers can be either .NET core applications, traditional .NET framework applications, or other Windows applications that run on a Windows server. So when did the work start to make Windows containers possible to run on top of OpenShift? In 2018, Red Hat and Microsoft announced the joint engineering collaboration with the goal of bringing a supported Windows containers feature into OpenShift.


GPS and water don't mix. So scientists have found a new way to navigate under the sea

Underwater devices already exist, for example to be fitted on whales as trackers, but they typically act as sound emitters. The acoustic signals produced are intercepted by a receiver that in turn can figure out the origin of the sound. Such devices require batteries to function, which means that they need to be replaced regularly – and when it is a migrating whale wearing the tracker, that is no simple task. On the other hand, the UBL system developed by MIT's team reflects signals, rather than emits them. The technology builds on so-called piezoelectric materials, which produce a small electrical charge in response to vibrations. This electrical charge can be used by the device to reflect the vibration back to the direction from which it came. In the researchers' system, therefore, a transmitter sends sound waves through water towards a piezoelectric sensor. The acoustic signals, when they hit the device, trigger the material to store an electrical charge, which is then used to reflect a wave back to a receiver. Based on how long it takes for the sound wave to reflect off the sensor and return, the receiver can calculate the distance to the UBL.  "In contrast to traditional underwater acoustic communication systems, which require each sensor to generate its own signals, backscatter nodes communicate by simply reflecting acoustic signals in the environment," said the researchers.
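The distance calculation the receiver performs is simple: the sound travels to the backscatter node and back, so the one-way distance is half the round-trip path. Assuming a speed of sound in seawater of roughly 1,500 m/s (an approximation; the true value varies with temperature, salinity, and depth):

```python
SPEED_OF_SOUND_WATER = 1500.0  # m/s, approximate for seawater

def distance_to_node(round_trip_seconds):
    # The wave travels to the node and back, so halve the total path.
    return SPEED_OF_SOUND_WATER * round_trip_seconds / 2

# A reflection arriving 0.4 s after transmission implies a node 300 m away.
print(distance_to_node(0.4))  # 300.0
```

Real systems must also separate the node's reflection from ordinary environmental echoes, which is where the piezoelectric modulation comes in.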


Temporal Tackles Microservice Reliability Headaches

Temporal consists of a programming framework (or SDK) and a managed service (or backend). The core abstraction in Temporal is a fault-oblivious stateful Workflow with business logic expressed as code. The state of the Workflow code, including local variables and threads it creates, is immune to process and Temporal service failures. Temporal supports the programming languages Java and Go, but has SDKs in the works for Ruby, Python, Node.js, C#/.NET, Swift, Haskell, Rust, C++ and PHP. In the event of a failure while running a Workflow, state is fully restored to the line in the code where the failure occurred and the process continues without developer intervention. One of the restrictions on Workflow code, however, is that it must produce exactly the same result each time it is executed, which rules out external API calls. Those must be handled through what it calls Activities, which the Workflow orchestrates. An activity is a function or an object method in one of the supported languages, stored in task queues until an available worker invokes its implementation function. When the function returns, the worker reports its result to the Temporal service, which then reports to the Workflow about completion.
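The sketch below is not the Temporal SDK; it only illustrates the replay idea the paragraph describes: because Workflow code must be deterministic, re-running it against a log of recorded Activity results restores state to the failure point without re-executing side effects. All names here are hypothetical.

```python
# Illustration only -- not Temporal's actual API. The key idea: activity
# results are recorded, so a crashed workflow can be re-run against the
# log and reach the same state without repeating external calls.
class Replayer:
    def __init__(self, history=None):
        self.history = list(history or [])  # recorded Activity results
        self.position = 0

    def run_activity(self, fn, *args):
        if self.position < len(self.history):
            result = self.history[self.position]  # replay: no side effect
        else:
            result = fn(*args)                    # first run: execute
            self.history.append(result)
        self.position += 1
        return result

calls = []
def charge_card(amount):  # an "Activity": may call external APIs
    calls.append(amount)
    return f"charged ${amount}"

def workflow(replayer):
    # Deterministic workflow logic; all side effects go through activities.
    return replayer.run_activity(charge_card, 42)

first = Replayer()
print(workflow(first))               # executes the activity for real
recovered = Replayer(first.history)  # worker crashed; a new one replays
print(workflow(recovered))           # same result, activity NOT re-run
```

This is also why workflow code must be deterministic: if a replayed run took a different branch, the recorded results would no longer line up with the code.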


The Cybersecurity Myths We Hear Ourselves Saying

There is a widely held belief — including from 19% of respondents — that the brands you can trust won't take advantage of you and that they will protect your data, as they surely do everyone else's data. However, the reality is that almost all mainstream sites are collecting data about you, and if they're not profiting off that data themselves, then there is a very good chance that hackers are. The more sites you go to, even trusted ones, the more cookies that are held in your browser. What's more, by surfing to numerous sites, not only are you providing more data about yourself, but you're also providing more pools of data that are being held by the various sites you visit. Applying basic theories of probability, increasing the number of pools increases the probability that any one of them will be breached. The hard truth is that the only way to effectively ensure privacy is to disconnect from the internet. Failing that, another good way to protect data is by encrypting internet traffic history by using a VPN. A VPN adds an extra layer of encrypted protection to a secured Wi-Fi network, preventing corporate agents from tracking you while you're online.
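The probability argument can be made precise: if each of n independent data pools has probability p of being breached, the chance that at least one is breached is 1 - (1 - p)^n, which grows quickly with n. The per-site probability below is an arbitrary illustrative figure.

```python
def breach_probability(n_sites, p_each):
    """Chance that at least one of n independent sites is breached,
    assuming each has probability p_each of a breach."""
    return 1 - (1 - p_each) ** n_sites

# Even a small per-site risk compounds as you spread data around.
for n in (1, 10, 50):
    print(n, round(breach_probability(n, 0.02), 3))
```

With a 2% per-site risk, data spread across 50 sites is more likely than not to be exposed somewhere, which is the article's point about multiplying pools of data.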


Running React Applications at the Edge with Cloudflare Workers

Cloudflare Workers are a cool technology introduced by Cloudflare a couple of years ago. Normally, you might have a server living in a data center somewhere in the world. You’ll likely put a CDN in front of that to handle caching and manage the load. But imagine having the power of a server directly inside your CDN’s data center. This is what Cloudflare Workers offers: a way to execute code directly at the edge of the CDN. This is a really powerful way to manage and modify requests going to and from your origin server—but it also opens up a whole new set of possibilities: instead of paying for and managing your own server, you can use Cloudflare Workers as your origin. This means lightning-fast responses directly at the edge without a round trip to another data center. ... These patterns are what inspired Flareact. Cloudflare Workers offers a Workers Sites feature that allows you to host a static site on top of Cloudflare Workers, with assets stored in a KV [Key/Value] store at the edge. This, combined with the underlying Workers dynamic platform, seemed like the perfect use case for Next.js. However, due to technical constraints, it proved too difficult to get Next.js working on Cloudflare Workers. So I set out to build my own framework modeled after Next.js.


The future is female: overcoming the challenges of being a woman in tech

Self-doubt affects everyone, but being in an industry in which you are outnumbered by the opposite gender is particularly tough. According to TrustRadius, three out of four tech professionals have experienced imposter syndrome at work, but women are 22% more likely than men to feel this way. Sheryl Sandberg even said that women in tech “hold ourselves back in ways both big and small, by lacking self-confidence, by not raising our hands, and by pulling back when we should be leaning in.” This is unsurprising, as women are typically taught not to brag from an early age. Self-marketing might feel egotistical and uncomfortable at first but it definitely feels more natural with practice! Confidence comes with knowledge; with technology constantly evolving as new software and systems are created, women making their way in tech should continue to learn as much as possible. Being on top of new developments will get you noticed and make it easier to advocate for yourself. But, if you don’t feel comfortable selling yourself, let others do this for you. Ask trusted clients, colleagues and contacts to give testimonials – many will be delighted to do so – and sing the praises of those around you, as people will return the favour.



Quote for the day:

"The problem with being a leader is that you're never sure if you're being followed or chased." -- Claire A. Murray

Daily Tech Digest - November 03, 2020

Why Securing Secrets in Cloud and Container Environments Is Important – and How to Do It

In containerized environments, secrets auditing tools make it possible to recognize the presence of secrets within source code repositories, container images, across CI/CD pipelines, and beyond. Deploying container services will activate platform and orchestrator security measures that distribute, encrypt and properly manage secrets. By default, secrets are secured in system containers or services — and this protection suffices in most use cases. However, for especially sensitive workloads — and Uber’s customer database backend service is a strong example, as are any data encryption or standard image scanning use cases — it’s not adequate to simply rely on conventional secret store security and secret distribution. These sensitive use cases call for more robust defense in depth protections. Within container environments, defense-in-depth implementations leverage deep packet inspection (DPI) and data leakage prevention (DLP) to enable secrets monitoring while they’re being used. Any transmission of a secret via network packets can be recognized, flagged and blocked if inappropriate. In this way, the most sensitive data can be effectively secured throughout the full container lifecycle, and attacks that could otherwise result in breach incidents can be thwarted due to this additional layer of safeguards.
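A toy version of the text-scanning half of such tooling: a few regular expressions flag likely secrets in source or payloads. The patterns are illustrative stand-ins; real DLP engines inspect network packets in flight and ship far more comprehensive rule sets.

```python
import re

# Hypothetical patterns for illustration only.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key":    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
    "generic_token":  re.compile(r"(?i)\b(?:api[_-]?key|token)[\"']?\s*[:=]\s*[\"']?\S{16,}"),
}

def scan_for_secrets(text):
    """Return the names of all patterns that match the given text."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]

leaky = 'config = {"api_key": "sk_live_abcdef0123456789"}'
print(scan_for_secrets(leaky))    # flags the embedded token
print(scan_for_secrets("hello"))  # clean payloads pass through
```

In a defense-in-depth deployment, a match like this on outbound traffic would be flagged or blocked before the secret ever leaves the container environment.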


Large-Scale Multilingual AI Models from Google, Facebook, and Microsoft

While Google's and Microsoft's models are designed to be fine-tuned for NLP tasks such as question-answering, Facebook has focused on the problem of neural machine translation (NMT). Again, these models are often trained on publicly-available data, consisting of "parallel" texts in two different languages, and again the problem of low-resource languages is common. Most models therefore train on data where one of the languages is English, and although the resulting models can do a "zero-shot" translation between two non-English languages, often the quality of such translations is sub-par. To address this problem, Facebook's researchers first collected a dataset of parallel texts by mining Common Crawl data for "sentences that could be potential translations," mapping sentences into an embedding space using an existing deep-learning model called LASER and finding pairs of sentences from different languages with similar embedding values. The team trained a Transformer model of 15.4B parameters on this data. The resulting model can translate between 100 languages without "pivoting" through English, with performance comparable to dedicated bi-lingual models.
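The mining step can be sketched with toy vectors: embed sentences, pair each sentence with its nearest cross-lingual neighbour by cosine similarity, and keep only high-confidence pairs. The three-dimensional vectors below are hand-written stand-ins; real LASER embeddings are high-dimensional outputs of the encoder.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy stand-ins for sentence embeddings: translations should land close
# together in the shared embedding space, unrelated sentences far apart.
english = {"the cat sleeps": [0.9, 0.1, 0.2], "stock prices fell": [0.1, 0.8, 0.3]}
french  = {"le chat dort":   [0.88, 0.12, 0.2], "il pleut":        [0.2, 0.1, 0.9]}

# Mining step: nearest neighbour above a similarity threshold.
pairs = []
for en, u in english.items():
    best_fr, best_vec = max(french.items(), key=lambda item: cosine(u, item[1]))
    if cosine(u, best_vec) > 0.95:
        pairs.append((en, best_fr))
print(pairs)
```

Only the genuine translation pair survives the threshold; "stock prices fell" has no close French neighbour here and is correctly dropped, mirroring how the mining filters out non-translations.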


How to Prevent Pwned and Reused Passwords in Your Active Directory

There are many different types of dangerous passwords that can expose your organization to tremendous risk. One way that cybercriminals compromise environments is by making use of breached password data. This allows launching password spraying attacks on your environment. Password spraying involves trying only a few passwords against a large number of end-users. In a password spraying attack, cybercriminals will often use databases of breached passwords, a.k.a. pwned passwords, to effectively try these passwords against user accounts in your environment. The philosophy here is that across many different organizations, users tend to think in very similar ways when it comes to creating passwords they can remember. Often passwords exposed in other breaches will be passwords that other users are using in totally different environments. This, of course, increases risk since any compromise of the password will expose not a single account but multiple accounts if used across different systems. Pwned passwords are dangerous and can expose your organization to the risks of compromise, ransomware, and data breach threats. What types of tools are available to help discover and mitigate these types of password risks in your environment?
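A minimal sketch of a pwned-password check at password-set time, using a tiny made-up breached corpus in place of a real feed such as Have I Been Pwned:

```python
import hashlib

# A tiny stand-in for a breached-password corpus; real checks consult
# large external feeds, typically via hashed lookups.
BREACHED_HASHES = {
    hashlib.sha1(pw.encode()).hexdigest()
    for pw in ("password1", "qwerty123", "Summer2020!")
}

def is_pwned(candidate):
    # Compare hashes rather than plaintext so the candidate password is
    # never stored or transmitted in the clear.
    return hashlib.sha1(candidate.encode()).hexdigest() in BREACHED_HASHES

print(is_pwned("Summer2020!"))        # breached: reject at set time
print(is_pwned("x9$Lq-ceramic-fog"))  # not in the corpus: allowed
```

Wiring a check like this into the password-change flow blocks exactly the reused credentials that spraying attacks rely on.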


The practice of DevOps

Continuous integration (CI) is the process that aligns the code and build phases in the DevOps pipeline. This is the process where new code is merged with the existing structure, and engineers ensure that everything is working fine. Developers who make frequent changes to the code push those changes to the shared central code repository. The repository starts with a master branch, which is a long-term stable branch. For every new feature, a new branch is created, and the developer regularly (daily) commits their code to this branch. After the development for a feature is complete, a pull request is created to the release branch. Similarly, a pull request is created to the master branch and the code is merged. We have seen slight variations to these practices across organisations. Sometimes the developers maintain a fork, or copy, of the central repository. This limits the merge issues to their own fork and isolates the central repository from the risk of corruption. Sometimes, the new branches don’t branch out from the feature branch but from the release or master branch. Small and mid-size companies often use open sites like GitHub for their code repository, while larger firms use Bitbucket as their code repository, which is not free.


The 5 Biggest Cloud Computing Trends In 2021

Currently, the big public cloud providers – Amazon, Microsoft, Google, and so on – take something of a walled garden approach to the services they provide. And why not? Their business model has involved promoting their platforms as one-stop shops, covering all of an organization's cloud, data, and compute requirements. In practice, however, the industry is increasingly turning to hybrid or multi-cloud environments (see below), with requirements for infrastructure to be deployed across multiple models. ... As far as cloud goes, AI is a key enabler of several ways in which we can expect technology to adapt to our needs throughout 2021. Cloud-based as-a-service platforms enable users on just about any budget and with any level of skill to access machine learning functions such as image recognition tools, language processing, and recommendation engines. Cloud will continue to allow these revolutionary toolsets to become more widely deployed by enterprises of all sizes and in all fields, leading to increased productivity and efficiency. ... Amazon recently joined the ranks of tech giants and startups offering their own platform for cloud gaming. Just as with music and video streaming before it, cloud gaming promises to revolutionize the way we consume entertainment media by offering instant access to vast libraries of games that can be played for a monthly subscription.


Quantum computers are coming. Get ready for them to change everything

The challenge lies in building quantum computers that contain enough qubits for useful calculations to be carried out. Qubits are temperamental: they are error-prone, hard to control, and always on the verge of falling out of their quantum state. Typically, scientists have to encase quantum computers in extremely cold, large-scale refrigerators, just to make sure that qubits remain stable. That's impractical, to say the least. This is, in essence, why quantum computing is still in its infancy. Most quantum computers currently work with fewer than 100 qubits, and tech giants such as IBM and Google are racing to increase that number in order to build a meaningful quantum computer as early as possible. Recently, IBM ambitiously unveiled a roadmap to a million-qubit system, and said that it expects a fault-tolerant quantum computer to be an achievable goal during the next ten years. Although it's early days for quantum computing, there is still plenty of interest from businesses willing to experiment with what could prove to be a significant development. "Multiple companies are conducting learning experiments to help quantum computing move from the experimentation phase to commercial use at scale," Ivan Ostojic, partner at consultancy McKinsey, tells ZDNet.


How enterprise architects and software architects can better collaborate

After the EAs have developed the enterprise architecture map, they should share these plans with software architects across each solution, application, or system. After all, it's important that the software architect who works most closely with the solution shares their own insight with the enterprise architect, who is more concerned with high-level architecture. Software architects and EAs can collaborate and suggest changes or improvements based on the existing architecture. Software architects can then go in and map out the new architecture based on the business requirements. Non-technical leaders can gain a better understanding, and this can lead to quicker alignment. Software architects can evaluate the quality of architecture and share their learnings with the enterprise architect, who can then incorporate findings into the enterprise architecture. Software architects are also largely interested in standardization, and they can help enterprise architects scale that standardization across the business. Once the EA has developed a full model or map, it's easier to see where assets can be reused. Software architects can recommend standardization and innovation and weigh in on the EA's suggestions for optimizing enterprise resources.


Dealing with Psychopaths and Narcissists during Agile Change

Many of the techniques or practices you use with healthy people do not work well with psychopaths or narcissists. For example, if you are using the Scrum framework, it is very risky to include a toxic person in a retrospective meeting. Countless consultants also believe that coaching works with most people. However, the psychopathic person normally ends up learning the coach's tools and manipulating him or her for their own purposes. This obviously aggravates the problem. ... From an organizational point of view, these toxic people look like excellent professionals, because they appear to perform almost any task successfully. This helps a company to "tick off" necessary accomplishments in the short term to increase agile maturity, managers to get their bonuses, and the psychopath to obtain greater prestige. Obviously, these gains are not sustainable, and what seems to be agility is transformed in the medium term into fragility and loss of resilience. Agile also requires—apart from good organizational health—execution with purpose, and visions and goals that involve feelings and inspire people to move forward.


Responsible technology: how can the tech industry become more ethical?

Three priorities right now should be crafting smart regulations, increasing the diversity of thought in the tech industry (i.e. adding more social scientists), and bridging the gulf between those that develop the technology and those that are most impacted by its deployment. If we truly want to impact behavior, smart regulations can be an effective tool. Right now, there is often a tension between “being ethical” and being successful from a business perspective. In particular, social media platforms typically rely on an ad-based business model where the interests of advertisers can run counter to the interests of users. Adding diverse thinkers to the tech industry is important because “tech” should not be confined to technologists. What we are developing and deploying is impacting people, which heightens the need to incorporate disciplines that naturally understand human behavior. By bridging the gulf between the individuals developing and deploying technology and those impacted by it, we can better align our technology with our individual and societal needs. Facial recognition is a prime example of a technology that is being deployed faster than our ability to understand its impact on communities.


Enterprise architecture has only begun to tap the power of digital platforms

The problem that enterprises have been encountering, Ross says, is getting hung up at stage three. "We observed massive failures in business transformations that frankly were lasting six, eight, 10 years. It's so hard because it's an exercise in reductionism, in tight focus," she said. "We recommend that companies zero in on their single most important data. This is the packaged data you keep – the customer data, the supply chain data… this is the thing that matters most. If they get this right, then things will take off." The challenge is now moving past this stage, as in the fourth stage, "we actually understand now that what's starting to happen is we can start to componentize our business," says Ross. She estimates that only about seven percent of companies have reached this stage. "This is not just about plugging modules into this platform, this is about recognizing that any product or process can be decomposed into people, process and technology bundles. And we can assign individual teams, or even individuals, accountability for one manageable piece that that team can keep up to date, improve with new technology, and respond to customer demand."



Quote for the day:

"Let him who would be moved to convince others, be first moved to convince himself." -- Thomas Carlyle

Daily Tech Digest - November 02, 2020

India needs IoT security standards

The bigger issue is that most of the sectors using digital technologies or integrating emerging technologies still have no digital risk element defined by their sectoral regulators. A national cyber strategy highlighting the key risks to these sectors is still awaiting cabinet approval. Hence, fighting ransomware, advanced persistent threats and malware is becoming tough for an industry that has no framework to rely upon to test or audit its systems. Earlier this year, the European standards body ETSI released a consumer IoT security standard. The standard specifies high-level security and data protection provisions for consumer IoT devices, including IoT gateways, base stations and hubs, smart cameras, TVs, smart washing machines, wearables, health trackers, home automation systems, connected gateways, refrigerators, door locks and window sensors. The standard provides a minimum baseline for securing devices and sets provisions for consumer IoT. It lays the foundation for strong password controls by stating that all consumer IoT device passwords must be unique. In India, and across the world, we see consumer IoT devices sold with universal default usernames and passwords.
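The "no universal default passwords" provision can be enforced at the factory with a check along these lines. The banned list and rules here are a small illustrative sample, not the standard's full requirements:

```python
# Illustrative check inspired by the ETSI baseline's first provision:
# no universal default passwords. The banned list is a tiny sample.
UNIVERSAL_DEFAULTS = {"admin", "root", "password", "12345", "default", ""}

def validate_device_password(password: str, serial_number: str) -> list:
    """Return the reasons a factory-set password fails the baseline."""
    problems = []
    if password.lower() in UNIVERSAL_DEFAULTS:
        problems.append("universal default password")
    if serial_number and serial_number in password:
        problems.append("derived from the device serial number")
    if len(password) < 8:
        problems.append("too short")
    return problems
```

A real conformance test would also cover the standard's other provisions, such as vulnerability disclosure and secure update mechanisms.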


How To Build Your Own Chatbot Using Deep Learning

Before jumping into the coding section, first, we need to understand some design concepts. Since we are going to develop a deep learning based model, we need data to train our model. But we are not going to gather or download any large dataset, since this is a simple chatbot. We can just create our own dataset in order to train the model. To create this dataset, we need to understand the intents that we are going to train. An "intent" is the intention of the user interacting with a chatbot, or the intention behind each message that the chatbot receives from a particular user. Depending on the domain for which you are developing a chatbot solution, these intents may vary from one chatbot solution to another. Therefore it is important to identify the right intents for your chatbot with relevance to the domain you are going to work with. So why do we need to define these intents? That's a very important point to understand. In order to answer questions, search a domain knowledge base, and perform various other tasks to continue conversations with the user, your chatbot really needs to understand what the users say or what they intend to do. That's why your chatbot needs to understand the intents behind the users' messages.
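A minimal sketch of the intent idea, with invented intents and examples; in a real chatbot this hand-made dataset would train a neural classifier, but a simple word-overlap score keeps the illustration self-contained:

```python
# A hand-made intent dataset of the kind described above. Each intent maps to
# example user messages; a word-overlap score stands in for the trained model.
INTENTS = {
    "greeting": ["hello", "hi", "hey", "good morning"],
    "hours":    ["when are you open", "opening hours", "what time do you close"],
    "goodbye":  ["bye", "goodbye", "see you later"],
}

def classify_intent(message: str) -> str:
    """Pick the intent whose examples share the most words with the message."""
    words = set(message.lower().split())
    def score(examples):
        return max(len(words & set(e.split())) for e in examples)
    return max(INTENTS, key=lambda intent: score(INTENTS[intent]))
```

The deep learning version replaces the overlap score with a model that generalises beyond the exact training phrases, but the dataset shape is the same.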


Finnish government rolls out digital projects to support SMEs

Finland’s plan aims to use the EDIH model to strengthen the digital capabilities of companies across the country. The central strategy is based on helping Finnish SME enterprises to easily access, exploit and profit from the more extensive use of business-impacting technologies such as artificial intelligence (AI), robotics and high-performance computing. The design of the Finnish NDIH plan means it can link directly into the EU’s EDIH knowledge base and network. The prospective Finnish hubs will have the built-in ability to accelerate digital transformation not just in Finland but also at a wider EU level. Earlier this year, the Finnish government rolled out an open survey scheme to measure interest from the country’s technology sector in these network projects. The survey was designed to help determine which Finnish tech actors may be qualified to join the hubs. The results from the survey will amplify the Finnish government ministry’s ability to develop a national framework. Moreover, it will allow it to organise and complete an application round for Finland’s candidates to the Europe-wide hub network by year-end 2020.


Data Privacy is a Brand Reputation Issue, Not a Compliance Issue

First, establish a privacy policy that is legal and ethical. Then, communicate the privacy and data utilization policy clearly, without all the obfuscating legalese. Demonstrate that consumer data will be collected only through honest and clear consent, never coercive or deceptive means. Next, create personal data pods for consumers on their own devices, so they can download and control their data easily and instantly in a usable, structured format. Ask consumers if the brand can use non-invasive Edge AI tools to provide them with learnings and key insights into themselves that improve their lives without violating their desires or values. Use those insights to generate predictions and recommendations that enhance the consumer’s life, that serve their best interests, and are delivered with emotionally intelligent humanity. Continuously seek, and embrace, candid consumer feedback on the actual value being created. Then, as real value is generated for the consumer using their available data, and the consumer’s trust increases, open up an honest and fiduciary dialogue about what additional data is needed in order to create even more personalized value. As more and more data is accumulated from the more trusting consumers, such as location, browsing, and other real-time data, reach out to the hesitant consumers and share use cases.
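The "personal data pod" step reduces to giving the consumer a structured, portable export of everything held about them. A sketch, where the field names and schema are purely hypothetical:

```python
import json

def export_data_pod(user_record: dict) -> str:
    """Serialise what a brand holds on a consumer into a portable, structured
    document the consumer can download, inspect, and control themselves."""
    pod = {
        "schema_version": "1.0",                      # hypothetical schema
        "subject": user_record.get("user_id"),
        "collected_data": user_record.get("data", {}),
        "consents": user_record.get("consents", []),
    }
    return json.dumps(pod, indent=2, sort_keys=True)
```

Because the export is plain JSON, the consumer can carry it to another service or simply read what has been collected about them.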


What Are The Fastest Growing Cybersecurity Skills In 2021?

Cybersecurity professionals with cloud security skills can gain a $15,025 salary premium by capitalizing on strong market demand for their skills in 2021. DevOps and Application Development Security professionals can expect to earn a $12,266 salary premium based on their unique, in-demand skills. 413,687 job postings for Health Information Security professionals were posted between October 2019 and September 2020, leading all skill areas in demand. Cybersecurity's fastest-growing skill areas reflect the high priority organizations place on building secure digital infrastructures that can scale. Application Development Security and Cloud Security are far and away the fastest-growing skill areas in cybersecurity, with projected 5-year growth of 164% and 115%, respectively. This underscores the shift from retroactive security strategies to proactive security strategies. According to The U.S. Bureau of Labor Statistics' Information Security Analyst's Outlook, cybersecurity jobs are among the fastest-growing career areas nationally. The BLS predicts cybersecurity jobs will grow 31% through 2029, over seven times faster than the national average job growth of 4%.


Q&A on the Book Accelerating Software Quality

There are multiple challenges that can be divided across test automation creation and maintenance, test reporting and analysis, test management, testing trends, and debugging. Traditional tools are not efficient enough to provide practitioners with reliable, robust, and maintainable test scripts. Test automation scripts keep breaking when developers change the app's code, or when elements in the app aren't properly recognized by the test automation framework. Ongoing maintenance of scripts is also a challenge that causes lots of false negatives and noise that seeps into the CI pipeline. As test execution scales, large volumes of test data accumulate and need to be sliced and diced to find the most relevant issues. Here, traditional tools are limited in filtering big test data and providing data-driven smart decisions, trends, root causes of failures, and more. Lastly, the time it takes to create a new code-based script, and debug it, is far too long to fit into today's aggressive timelines. Hence, AI and ML are in a great position to close this gap by automatically generating test code and maintaining it through self-healing methods.
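A highly simplified sketch of the self-healing idea: when the primary locator for a UI element breaks after a code change, fall back to alternative locators and promote the one that works. The page is modelled here as a plain dict, so the locator strings are purely illustrative:

```python
def find_element(page: dict, locators: list):
    """Try each locator in order; 'heal' by promoting the first that works."""
    for i, locator in enumerate(locators):
        if locator in page:
            if i > 0:                            # primary locator was broken:
                locators.insert(0, locators.pop(i))  # promote the survivor
            return page[locator]
    raise LookupError("no locator matched; manual maintenance needed")
```

Real self-healing frameworks use richer element fingerprints (attributes, position, text) rather than a fixed fallback list, but the recover-and-remember loop is the same.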


How three simple steps will help you save on your cloud spending

The first crucial step is to get hold of the data in a format that will help you understand how money is being spent. This will allow you to put guard rails up to protect against overspending. To do this, you’ll need to utilise tools that break down the usage data. Otherwise, you will just receive a bill that looks like a long shopping list that is almost impossible to decipher. AWS does offer tools that can help here, but you can also add third-party solutions like CloudCheckr to collate this information and play it back to you, with actionable metrics. This will also alert you to any underused resources that could be turned off. You will also want to ensure you implement a consistent tagging strategy across all of your cloud estate. This will allow you to break spending down to the individual customer, department, team and even developer. It will also help you understand where you’re getting a good return on investment and where you might need to be more efficient. It’s essential that you go native in public cloud infrastructures – otherwise you will miss key features that allow you to automate and orchestrate. For example, you should be taking advantage of features that will automate day-to-day management tasks, such as software updates and back-ups.
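Once resources carry consistent tags, breaking the bill down is a simple roll-up. A sketch, assuming usage records with a cost and a tag map (the field names are invented):

```python
from collections import defaultdict

def cost_by_tag(usage_records: list, tag_key: str) -> dict:
    """Roll up line-item cloud costs by a tag such as 'team' or 'customer'.
    Untagged resources are grouped so gaps in the tagging strategy show up."""
    totals = defaultdict(float)
    for record in usage_records:
        group = record.get("tags", {}).get(tag_key, "<untagged>")
        totals[group] += record["cost"]
    return dict(totals)
```

The "<untagged>" bucket is deliberate: a large untagged total is itself a signal that the tagging strategy isn't being applied consistently.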


Banking Must Reduce AI Talent Gap for Digital Transformation Success

Using outside talent to improve productivity and results with data and AI technology is definitely a valid path in the short run, especially as most banks and credit unions play catch-up in the race to leverage data insights across the organization. But, as mentioned, the ability to apply the insights to specific product and service needs requires the experience of those who have known the business for years. Without the involvement of the users of the data and AI results, technology is deployed in a vacuum. Marketing managers need to understand the targeting and personalization methodology the models create. Product managers must understand the changes to processes and procedures that are recommended by AI technologies, to ensure all of the required steps are in place for compliance purposes. And risk managers need to feel comfortable that the assumptions made by models continue to reflect the cybersecurity requirements of the organization. The need to upgrade the skills of the consumers of data and AI solutions is usually met by training existing employees of the organization. This is usually a much more efficient and less disruptive process than trying to teach technology staff the internal intricacies of an organization.


Tackling multicloud deployments with intelligent cloud management

Looking at what the public cloud providers offer in terms of a control plane for managing workloads, McQuire says Azure focuses on hybrid to edge and on-premise workload management. “Google Cloud has been a latecomer in the enterprise and going full board with cloud management,” he adds. For the moment, says McQuire, the focus is on orchestration and control, security and governance. He says this is a reflection of where IT organisations are in terms of how they are using multiple public clouds. “There is a need to understand the economic impact of moving workloads around,” he says. “Not only do you have a need to understand the performance of different IT environments, whether to deploy on-premise, in a private cloud or use one of the three public clouds, there is also a requirement to understand the economics associated with those decisions.” It is now not uncommon for IT decision-makers to standardise on one public cloud for specialist workloads such as artificial intelligence (AI), and use another for infrastructure as a service (IaaS). McQuire adds: “Two years ago, companies started running machine learning workloads with a single cloud provider. ..."


How Service Mesh Enables a Zero-Trust Network

To safeguard user data, organizations are adopting a zero-trust security model. “Zero trust security means that we’re not trusting anybody,” said Palladino. “We don’t trust our own services. We don’t trust our own team members.” Placing too much trust in users, services and teams could cause a catastrophic failure. And, “there is no bigger risk than thinking you are secure, while in reality, you are not,” he said. ... Implementing these permissions is where a service mesh comes in. Application teams often build security themselves, yet it’s generally bad practice to build your own cybersecurity. Security for microservices requires deep expertise, and standardizing connectivity between various microservices can easily result in fragmented security implementations. Instead of building your own security infrastructure, Palladino recommended utilizing a service mesh. By using a service mesh as a control plane for microservices, platform architects can define rules and attributes to generate an identity on a per-service basis. A service mesh also removes the burden of networking from developers, enabling them to focus more on their core logic. “Application teams become consumers of connectivity, as opposed to the makers of this connectivity,” Palladino noted.
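The deny-by-default rule set a mesh control plane distributes can be reduced to a sketch like this; the service names and rule format are illustrative, not any particular mesh's API:

```python
# Zero trust in miniature: every service-to-service call is denied unless a
# rule explicitly allows it. In a real mesh the caller's identity would be a
# cryptographically verified certificate, not a bare string.
ALLOWED_CALLS = {
    ("frontend", "orders"),
    ("orders", "payments"),
}

def authorize(caller_identity: str, callee: str) -> bool:
    """Deny by default; permit only explicitly allowed caller/callee pairs."""
    return (caller_identity, callee) in ALLOWED_CALLS
```

Note that `frontend` cannot reach `payments` directly even though both pairs around it are trusted: trust is per edge, never transitive.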



Quote for the day:

"You can't use up creativity. The more you use, the more you have." -- Maya Angelou

Daily Tech Digest - November 01, 2020

Why is Site Reliability Engineering Important?

“The term SRE surely has been introduced by Google, but directly or indirectly several companies have been doing stuff related to SRE for a long time, though I must say that Google gave it a new direction after coining the term ‘SRE.’ I have a clear view on SRE as I believe it walks hand-in-hand with DevOps. All your infrastructure, operations, monitoring, performance, scalability and reliability factors are accounted for in a nice, lean and automated system (preferably); however this is not enough. Culture is an important aspect driving the SRE aspects, along with business needs. As the norm ‘to each, his own’ goes, SRE is no different. It is easy to get inspired from pioneer companies, but it’s impossible to copy their culture and means to replicate the success, especially with your ‘anti-patterns’ and ‘traditional’ remedial baggage. Do you have similar infrastructure and business needs as the company showcasing brilliant success with SRE? No. Can it help you? Absolutely. The key factor here is to recognize what is important to your success blueprint after understanding the fundamentals of it and find your own success factors considering your cultural needs. Your strategy and culture need to walk together, just like your guiding (strategy) and driving (culture) factors.”


AI in Healthcare — Is the Future of Healthcare already here?

Through a series of neural networks, AI is helping healthcare providers achieve this balance. Facial recognition software is combined with machine learning to detect patterns in facial expressions that point towards the possibility of a rare disease. Moon, developed by Diploid, enables early diagnosis of rare diseases through software, allowing doctors to begin treatment early. Artificial intelligence in healthcare carries special significance in detecting rare diseases earlier than would otherwise be possible. ... Health monitoring is already a widespread application of AI in healthcare. Wearable health trackers such as those offered by Apple, Fitbit, and Garmin monitor activity and heart rates. These wearables are then in a position to send all of the data forward to an AI system, yielding more insights and information about the ideal activity requirements of a person. These systems can detect workout patterns and send alerts when someone misses their workout routine. The needs and habits of a patient can be recorded and made available to them when need be, improving the overall healthcare experience. For instance, if a patient needs to avoid heavy cardiac workouts, they can be notified when high levels of activity are detected.
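The missed-workout alert described above reduces to a small date comparison; the two-day threshold and function name below are invented for illustration:

```python
from datetime import date

def missed_workout_alert(workout_dates, today, gap_days=2):
    """Flag when the wearer's last recorded workout is more than
    `gap_days` ago -- the kind of pattern-based nudge described above."""
    if not workout_dates:
        return True
    return (today - max(workout_dates)).days > gap_days
```

A production tracker would learn each wearer's typical schedule rather than use a fixed threshold, but the trigger logic has this shape.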


Why kids need special protection from AI’s influence

Algorithms can change the course of children’s lives. Kids are interacting with Alexas that can record their voice data and influence their speech and social development. They’re binging videos on TikTok and YouTube pushed to them by recommendation systems that end up shaping their worldviews. Algorithms are also increasingly used to determine what their education is like, whether they’ll receive health care, and even whether their parents are deemed fit to care for them. Sometimes this can have devastating effects: this past summer, for example, thousands of students lost their university admissions after algorithms—used in lieu of pandemic-canceled standardized tests—inaccurately predicted their academic performance. Children, in other words, are often at the forefront when it comes to using and being used by AI, and that can leave them in a position to get hurt. “Because they are developing intellectually and emotionally and physically, they are very shapeable,” says Steve Vosloo, a policy specialist for digital connectivity at Unicef, the United Nations Children’s Fund. Vosloo led the drafting of a new set of guidelines from Unicef designed to help governments and companies develop AI policies that consider children’s needs.


A new threat matrix outlines attacks against machine learning systems

Mikel Rodriguez, a machine learning researcher at MITRE who also oversees MITRE’s Decision Science research programs, says that we’re now at the same stage with AI as we were with the internet in the late 1980s, when people were just trying to make the internet work and weren’t thinking about building in security. We can learn from that mistake, though, and that’s one of the reasons the Adversarial ML Threat Matrix has been created. “With this threat matrix, security analysts will be able to work with threat models that are grounded in real-world incidents that emulate adversary behavior with machine learning,” he noted. The matrix will also help them think holistically and spur better communication and collaboration across organizations by giving a common language, or taxonomy, for the different vulnerabilities, he says. “Unlike traditional cybersecurity vulnerabilities that are tied to specific software and hardware systems, adversarial ML vulnerabilities are enabled by inherent limitations underlying ML algorithms. Data can be weaponized in new ways, which requires an extension of how we model cyber adversary behavior, to reflect emerging threat vectors and the rapidly evolving adversarial machine learning attack lifecycle,” MITRE noted.
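As a toy illustration of why ML models are attackable through their data rather than their code, consider a hand-rolled linear classifier: nudging each input feature by a small amount in the direction given by the sign of the weights (the idea behind the fast gradient sign method) can flip its decision. The weights and inputs below are made up:

```python
# A tiny linear classifier: score = w . x + b, predict 1 if score > 0.
WEIGHTS = [0.9, -0.5, 0.4]
BIAS = -0.2

def predict(x):
    score = sum(w * xi for w, xi in zip(WEIGHTS, x)) + BIAS
    return 1 if score > 0 else 0

def adversarial(x, epsilon):
    """Perturb each feature by epsilon in the direction that lowers the
    score -- a sign-based perturbation, as in the fast gradient sign method."""
    return [xi - epsilon * (1 if w > 0 else -1) for xi, w in zip(x, WEIGHTS)]
```

The perturbed input looks almost identical to the original, which is exactly the "inherent limitation" the threat matrix is concerned with: the vulnerability lives in the model's geometry, not in a patchable bug.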


Understanding the modular monolith and its ideal use cases

Conventional monolithic architectures focus on layering code horizontally across functional boundaries and dependencies, which inhibits their ability to separate into functional components. The modular monolith revisits this structure and configures it to combine the simplicity of single-process communication with the freedom of componentization. Unlike the traditional monolith, modular monoliths attempt to establish bounded contexts by segmenting code into individual feature modules. Each module exposes a programming interface definition to other modules; a change to that definition can trigger its dependents to change in turn. Much of this rests on stable interface definitions. However, by limiting dependencies and isolating data stores, the architecture establishes boundaries within the monolith that resemble the high cohesion and low coupling found in a microservices architecture. Development teams can start to parse out functionality, but can do so without worrying about the management baggage tied to multiple runtimes and asynchronous communication. One benefit of the modular monolith is that the logic encapsulation enables high reusability, while data remains consistent and communication patterns simple.
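A minimal sketch of such a boundary, with invented module names: each module keeps its data store private and exposes a small interface, while communication stays in-process as plain method calls:

```python
class OrdersModule:
    """Owns the orders data; other modules never touch it directly."""
    def __init__(self):
        self._orders = {}               # data store isolated inside the module

    def place_order(self, order_id, item):   # exposed interface
        self._orders[order_id] = item

    def get_item(self, order_id):            # exposed interface
        return self._orders[order_id]

class BillingModule:
    """Depends only on OrdersModule's interface, not on its data."""
    def __init__(self, orders):
        self._orders = orders
        self._invoices = []

    def invoice(self, order_id):
        item = self._orders.get_item(order_id)   # in-process communication
        self._invoices.append(f"invoice for {item}")
        return self._invoices[-1]
```

Because the call between modules is a plain method call, there is no network, serialization, or separate runtime to manage, yet the data-ownership boundary mirrors the one a microservices split would draw.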


What's Wrong With Big Objects In Java?

There are several ways to fix or at least mitigate this problem: tune the GC, change the GC type, fix the root cause, or upgrade to the newer JDK. Tuning GC, in this case, means increasing the heap or increasing the region size with -XX:G1HeapRegionSize so that previously Humongous objects are no longer Humongous and follow the regular allocation path. However, the latter will decrease the number of regions, that may negatively affect GC performance. It also means coupling GC options with the current workload (which may change in the future and break your current assumptions). However, in some situations, that's the only way to proceed. A more fundamental way to address this problem is to switch to the older Concurrent Mark-Sweep (CMS) garbage collector, via the -XX:+UseParNewGC -XX:+UseConcMarkSweepGC flags (unless you use one of the most recent JDK versions in which this collector is deprecated). CMS doesn't divide the heap into numerous small regions and thus doesn't have a problem handling several-MB objects. In fact, in relatively old Java versions CMS may perform even better overall than G1, at least if most of the objects that the application creates fall into two categories: very short-lived and very long-lived.


Analysis: Tactics of Group Waging Attacks on Hospitals

UNC1878 has recently changed some of its tactics. For example, it no longer uses Sendgrid to deliver the phishing emails and to supply the URLs that lead to the malicious Google documents, Mandiant reports. "Recent campaigns have been delivered via attacker-controlled or compromised email infrastructure and have commonly contained in-line links to attacker-created Google documents, although they have also used links associated with the Constant Contact service," according to the Mandiant report. Hosting the malicious documents on a legitimate service is also a new twist. Earlier campaigns were hosted on a compromised infrastructure, Mandiant researchers say. Once the group delivers a loader via a malicious document, it downloads the Powertrick backdoor and/or Cobalt Strike Beacon payloads to establish a presence and to communicate with the command-and-control server, the report says. Mandiant notes that the group uses Powertrick infrequently, perhaps for establishing a foothold and performing initial network and host reconnaissance. ... The group maintains persistence by creating a scheduled task, adding itself to the startup folder as a shortcut, creating a scheduled Microsoft BITS job using /setnotifycmdline and in some cases using stolen login credentials, the report says.


The secret to designing a positive future with AI? Imagination

Focusing on the positive is key to steering toward a positive destination. Instead of being passive passengers in a collective spaceship veering towards dangerous planets, we can instead actively move in the direction of the outcomes we want, such as full employment and equity. This is, at its heart, an exercise in vision. To be sure, realizing that vision will require a commitment to idealism, hope, and an openness towards change and uncertainty. But the vision is paramount and will set our future course. ... Building such a vision is a collective intelligence exercise that requires many voices from around the world. In taking this step, we can empower participants from various backgrounds and countries to make this vision real and identify the implications of that long-term vision for present-day policy decisions. Such work can seem like a creative writing prompt but was actually a key exercise undertaken by the World Economic Forum’s Global AI Council (GAIC), a multi-stakeholder body that includes leaders from the public and private sectors, civil society and academia. In April 2020, we began pursuing an ambitious initiative called Positive AI Economic Futures, taking as its starting point the hypothesis that AI systems will eventually be able to do the great majority of what we currently call work, including all forms of routine physical and mental labour.


How Kubernetes extends to machine learning (ML)

The scalability of Kubernetes, alongside the flexibility of ML, can allow developers within the open source space to innovate without experiencing strain on their workloads. Thomas Di Giacomo, president of engineering and innovation at SUSE, explained: “Kubernetes and cloud native technologies enable a broad selection of applications because they serve as a reliable connecting mechanism for a multitude of open source innovations, ranging from supporting various types of infrastructure to adding AI and ML capabilities that help make developers’ lives simpler and business applications more streamlined. “Kubernetes facilitates fast, simple management and clear organisation of containerised services and applications. The technology also enables the automation of operational tasks, like application availability management and scaling. “There’s no denying that AI and ML technologies will have a massive impact on the open source market. Developed by the community, AI open source projects will help to develop and train ML models, and will provide a powerful feedback loop that will enable faster innovation. “We have already witnessed that at SUSE, having been working on and developing AI/ML solutions together with Kubernetes to streamline their use by data scientists, who can then focus on their own needs and processes rather than the mechanics.”
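As an illustration of how Kubernetes manages a containerised ML service, a standard apps/v1 Deployment might look like the following sketch; the image name and resource figures are placeholders, not recommendations:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: model-serving            # hypothetical ML inference service
spec:
  replicas: 3                    # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: model-serving
  template:
    metadata:
      labels:
        app: model-serving
    spec:
      containers:
      - name: model
        image: example.com/sentiment-model:1.0   # placeholder image
        resources:
          requests:
            cpu: "500m"          # placeholder sizing for the model container
            memory: "1Gi"
```

Availability management and scaling then come for free: Kubernetes restarts failed replicas, and the replica count can be raised, or driven by an autoscaler, without touching the model code.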


REPORT: Consumer Privacy Concerns Demand Regulatory Compliance

Data privacy is gaining more attention from consumers in multiple markets, including the European Union and the United States. A study of U.S. consumers found that 87 percent feel data privacy should be considered a human right, for example. Many respondents are also wary of what businesses are doing with their information, with roughly 70 percent of consumers stating that they do not trust companies to sell their data ethically. Such trends are leading businesses and regulators to reconsider the United States’ existing data privacy and security standards. Data security is also being debated in the EU. The region’s General Data Protection Regulation (GDPR), which governs online data collection and storage, has been in place for approximately two years, but large companies are more frequently coming under regulatory scrutiny as consumers become more familiar with the rule. Google-owned video streaming service YouTube, for example, is currently facing a lawsuit over whether its data practices violate GDPR. The suit alleges that the platform fails to comply with GDPR because it collects data from minors, who cannot legally consent to sharing their digital information under the regulation.



Quote for the day:

"Risks are the seeds from which successes grow." -- Gordon Tredgold

Daily Tech Digest - October 31, 2020

Six frightening data stories that will give you nightmares

Acarophobia is a fear of tiny, crawling, parasitic insects; apiphobia is a fear of bees; and arachnophobia, a fear of spiders. But what is the term for a phobia of those beastly bugs that can bring down an entire server? This happened at a London advertising agency! The creative team had an important customer deadline to meet, and they could no longer access their critical Adobe Illustrator data and other large creative files. The disaster recovery plan would take two days to restore the data: one day after the job deadline. The clock was ticking… The problem was the recovery time objective (RTO) set up years ago; because a longer RTO means a lower price, this firm had decided a shorter RTO wasn’t worth it. But don’t be fooled when it comes to protecting your business-critical data, for there’s always a price to pay… Imagine you have friends coming for a Halloween party and you arrive home from the supermarket, bags full of decorations, drinks, and ice, only to find that you don’t have your house key. No doubt workers who had planned to work on some company files, only to realise they cannot access them from home, feel the same way, especially during this Covid-19 pandemic. Users may be completely locked out of their data files, but more often, they face a tedious and clunky experience to access those files.


Honeywell introduces quantum computing as a service with subscription offering

The H1 has been up and running for several months internally at Honeywell, but has been in use by customers for about three weeks, said Uttley. Honeywell has been working with eight enterprise customers, including DHL, Merck, and JP Morgan Chase. Some of those customers had been working on the H0 system and were able to easily "port over" work to the new machine, said Uttley. One reason for the subscription model is that substantial hand-holding still happens: the subscription’s blocks of dedicated machine time include participation by Honeywell quantum theorists and Honeywell operations teams, who work "hand in hand" with customers. This hands-on approach makes sense given that much of the work customers will be doing initially is about building trust, said Uttley. They will be seeing what results they get from the quantum computer and matching those against the same work run on a classical computer, to validate that the quantum system produces correct output. On top of the blocks of dedicated time, each subscriber can get queueing time, said Uttley, where jobs are processed as capacity is available.


JPM Coin debut marks start of blockchain’s value-driven adoption cycle

In a recent interview, JP Morgan’s global head of wholesale payments stated that the launch of JPM Coin, as well as certain other “behind the scenes moves,” prompted the banking giant to create a new business unit called Onyx. The unit will allow the company to sharpen its focus on its various ongoing blockchain and digital currency efforts. Onyx reportedly has more than 100 staff members and has been established with the goal of commercializing JP Morgan’s various envisioned blockchain and crypto projects, moving existing ideas from their research and development phase to something more tangible. When asked about future plans and whether crypto factors majorly into the company’s upcoming scheme of things, a media relations representative for JP Morgan told Cointelegraph that there are no additional announcements on top of what was already unveiled recently. Lastly, on Oct. 28, the bank announced that it was going to rebrand its blockchain-based Interbank Information Network, or IIN, as “Liink,” as well as introduce two new applications — Confirm and Format — developed for the specific purposes of account validation and fraud elimination for its clients. Liink will be a part of the Onyx ecosystem and will enable participants to collaborate with one another in a seamless fashion.


What is DevOps? with Donovan Brown

“DevOps is the union of people, process, and products to enable continuous delivery of value to our end users.” – Donovan Brown. Why we do “DevOps” comes down to that one big word Donovan highlights: value. Our customers want the services we provide to always be available and reliable, and to let them know if something is wrong. These are the same expectations we should all bring when working together to deliver the application or service our end user will experience. By fostering an environment that values a common goal amongst our team, we can see greater productivity and success for our users. Donovan Brown opens the “Deliver DevOps” event at the Microsoft Reactor in the UK by taking us for a lap around Azure DevOps, concluding with the announcement of a new UK geo to store your Azure DevOps data. What is DevOps? This is a question that seems to be constantly debated. Is it automation? Is it culture? Is DevOps a team? Is DevOps a philosophy? All great questions to ask. By looking to DevOps, teams are able to provide the most value for their customers. In this video, Donovan Brown, Principal DevOps Manager at Microsoft, gives us the Microsoft definition of DevOps in just a few minutes.


Driving remote workforce efficiency with IoT security

As with all cybersecurity issues, no “one size fits all” approach to IoT security exists. At the core, the IoTSCF provides guidance across compliance classes; however, it does set some specific minimum requirements for all IoT devices. Among these security controls, the IoTSCF suggests:

- Having an internal organizational member who owns and is responsible for monitoring security;
- Ensuring that this person adheres to the compliance checklist process;
- Establishing a policy for interacting with internal and third-party security researchers;
- Establishing processes for briefing senior executives in the event the IoT device leads to a security incident;
- Ensuring a secure process for notifying partners/users; and
- Incorporating IoT and IoT-based security events into the Security Policy.

From a hardware and software perspective, the following suggestions guide all compliance classes:

- Ensuring the product’s processor system has an irrevocable hardware Secure Boot process;
- Enabling the Secure Boot process by default;
- Ensuring the product prevents loading unauthenticated software and files;
- Ensuring that devices supporting remote software updates can digitally sign software images ...
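The last control, rejecting firmware images that fail signature verification, can be sketched in a few lines. Real secure boot uses an asymmetric key pair with the public key anchored in hardware; the example below substitutes an HMAC with a hypothetical shared key purely to illustrate the verify-before-accept flow (the key, function names, and firmware bytes are all invented for illustration):

```python
import hashlib
import hmac

# Hypothetical provisioning key. A real secure-boot chain would verify an
# asymmetric signature against a public key burned into the hardware root
# of trust, not a shared secret.
SIGNING_KEY = b"device-provisioning-key"

def sign_image(image: bytes) -> bytes:
    # Produce an integrity tag over the entire firmware image.
    return hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()

def verify_image(image: bytes, tag: bytes) -> bool:
    # Recompute the tag and compare in constant time; only images that
    # verify should ever be booted or installed.
    expected = hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

firmware = b"\x7fELF...firmware-blob"
tag = sign_image(firmware)
print(verify_image(firmware, tag))            # untampered image: accepted
print(verify_image(firmware + b"\x00", tag))  # modified image: rejected
```

The design point carried over from the framework's guidance is that verification is unconditional: an update that arrives without a valid tag is simply never loaded.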


Why 2021 will be the year of low-code

Low-code will make it to the mainstream in 2021, with 75% of development shops adopting such platforms, according to Forrester's 2021 predictions for software development. This shift is due in part to the new working environment and product demands caused by the COVID-19 crisis. Forrester analysts found that "enterprises that embraced low-code platforms, digital process automation, and collaborative work management reacted faster and more effectively than firms relying only on traditional development." ... Forrester analysts also noted the importance of adjusting communication habits and workflows in the new year. The report notes that teams that had already invested in high-trust culture, agile practices, and cloud platforms found it easier to adapt to 100% remote work. Teams that relied on a command-and-control approach to work and older platforms struggled to adjust to this new environment. ... Making this happen will require sustained attention and active management: "Keeping developers out of endless virtual meetings while maintaining governance will particularly challenge organizations in regulated industries, and they will embrace value stream management as a way of maintaining data-informed insights and collecting process metrics that enable compliance and governance at scale."


Artificial Intelligence Is Modernizing Restaurant Industry

While technology is growing and benefiting many industries, certain industries are still struggling to survive. One such industry fighting a battle of endurance amidst its peers is the restaurant business. 52% of restaurant proprietors agree that high operating and food costs are among the top difficulties they face while running their business. Restaurants can keep on top of everything through the proper implementation of technology. One such technology said to have a critical impact on this industry niche is artificial intelligence. There are undoubtedly various advantages to implementing artificial intelligence in restaurants, such as improved customer experience, more sales, less food wastage, and so forth. ... The weather is an important factor in restaurant sales. Studies show that 7 out of 10 restaurants state that weather forecasts affect their sales. Perhaps it’s bright and an ideal day to enjoy a sangria on a patio with friends, or possibly it’s cold and desolate outside and you feel like having hot cocoa at a cozy bistro. Whether it’s bright, cloudy, rainy, snowy or hotter than expected, customers are attracted to specific foods and beverages depending on the conditions outside.


Flipping the Odds of Digital Transformation Success

The technology is important, but the people dimension (organization, operating model, processes, and culture) is usually the determining factor. Organizational inertia from deeply rooted behaviors is a big impediment. Failure should not be an option, and yet it is the most common result. The consequences in terms of investments of money, organizational effort, and elapsed time are massive. Digital laggards fall behind in customer engagement, process efficiency, and innovation. In contrast, companies that are successful in mastering digital technologies, establishing a digital mindset, and implementing digital ways of working can reach a new rhythm of continuous improvement. Digital, paradoxically, is not a binary state, but one of ongoing innovation as new waves of disruptive technologies are released to the market. Consider, for example, artificial intelligence, blockchain, the Internet of Things, spatial computing, and, in time, quantum computing. Unsuccessful companies will find it extremely hard to leverage these advances, while digital organizations will be innovating faster and pulling further away from digital laggards—heading for that bionic future. Digital transformations can define careers as well as companies.


SREs: Stop Asking Your Product Managers for SLOs

One of the fundamental premises of site reliability engineering is that you should base your reliability goals—i.e., your service level objectives (SLOs)—on the level of service that keeps your customers happy. The problem is, defining what makes your customers happy requires communication between site reliability engineers (SREs) and product managers (PMs), a.k.a. business stakeholders, and that can be a challenge. Let’s just say that SREs and PMs have different goals and speak slightly different languages. It’s not that PMs fail to appreciate the value that SREs bring to the table. Today, in the era of software as a service, qualities such as security, reliability and data privacy are respected as critical features of the service-product a SaaS company delivers. Modern application users and customers of software services care a lot about data privacy, cybersecurity and uptime; therefore, PMs care, too. In fact, it’s not uncommon to see these features touted prominently on a company’s website, because the folks in marketing know that customers are making purchasing decisions based on whether the company can deliver reliability, speed, security and performance quality. So, yes, PMs do care.
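One way to ground an SLO conversation between SREs and PMs is to translate the objective into an error budget: the amount of unreliability the target actually permits. As a minimal sketch (the function name and the 30-day window are illustrative assumptions, not from the article):

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed downtime, in minutes, for an availability SLO over a window.
    E.g. a 99.9% SLO permits 0.1% of the window to be unavailable."""
    return (1.0 - slo) * window_days * 24 * 60

# A 99.9% SLO over 30 days leaves roughly 43.2 minutes of downtime;
# 99.99% leaves only about 4.3 minutes.
print(round(error_budget_minutes(0.999), 1))
print(round(error_budget_minutes(0.9999), 1))
```

Framing the discussion as "how many minutes per month can we afford to be down?" gives PMs and SREs a shared, customer-facing quantity to negotiate over, rather than an abstract number of nines.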


How to improve the developer experience

Developers come into a software project motivated, but it doesn't take long for that energy to get sapped. "[Onboarding] is where I feel most developers lose their initial spurt of motivation," said Chris Hill, senior manager of software development at T-Mobile. An inherited software project comes with immediate barriers to productivity, such as lacking or obscure documentation and the time a developer wastes waiting for access to the code repository and dev environment. Once work begins, the developer must grasp what the code means, how it delivers value and all the tools that are part of the dev cycle. "Every [inherited project] feels like I stepped in the middle of an IKEA build cycle, and all the parts are missing, and there are no instructions, and there's no support line, and all the screws are stripped, and I have pressure that I should come out with my first feature next week," Hill said. At T-Mobile, Hill prioritizes developer experience, which is comparable to user experience but specific to developers' work. A positive developer experience is one in which programmers can easily access the tools or resources they need and apply their expertise without unnecessary constraints.



Quote for the day:

"Remember teamwork begins by building trust. And the only way to do that is to overcome our need for invulnerability." -- Patrick Lencioni