Daily Tech Digest - August 14, 2020

Secure at every step: A guide to DevSecOps, shifting left, and GitOps

In practice, to hold teams accountable for what they develop, processes need to shift left to earlier in the development lifecycle, where development teams are. By moving steps like testing, including security testing, from a final gate at deployment time to an earlier step, teams make fewer mistakes and developers can move more quickly. The principles of shifting left apply to security as well as to operations. It’s critical to prevent breaches before they can affect users, and to move quickly to address and fix newly discovered security vulnerabilities. Instead of security acting as a gate, integrating it into every step of the development lifecycle allows your development team to catch issues earlier. A developer-centric approach means they can stay in context and respond to issues as they code, not days later at deployment, or months later from a penetration test report. Shifting left is a process change, but it isn’t a single control or specific tool—it’s about making all of security more developer-centric, and giving developers security feedback where they are. In practice, developers work with code in Git, so we’re seeing more security controls being applied in Git.
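As a concrete sketch of what a Git-based security control can look like, here is a hypothetical pre-commit hook in Python that scans staged files for obvious hard-coded secrets before they reach the repository; the patterns, file handling, and hook itself are illustrative assumptions, not a substitute for a dedicated scanner.

```python
#!/usr/bin/env python3
"""Hypothetical pre-commit hook: block commits that appear to contain secrets.

Illustrative only; a real pipeline would also run a dedicated scanner in CI.
The point is the "feedback where developers are" idea: the check runs in Git,
before anything is deployed.
"""
import re
import subprocess
import sys

# Deliberately rough patterns, for demonstration purposes only.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS-style access key id
    re.compile(r"-----BEGIN (RSA|EC) PRIVATE KEY-----"),  # private key material
    re.compile(r"(?i)(password|api[_-]?key)\s*=\s*['\"][^'\"]{8,}"),
]

def staged_files():
    """Return the paths staged for this commit (added, copied, or modified)."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def main() -> int:
    findings = []
    for path in staged_files():
        try:
            text = open(path, "r", errors="ignore").read()
        except OSError:
            continue  # skip unreadable or deleted paths
        for pattern in SECRET_PATTERNS:
            if pattern.search(text):
                findings.append((path, pattern.pattern))
    for path, pat in findings:
        print(f"possible secret in {path} (pattern: {pat})", file=sys.stderr)
    return 1 if findings else 0  # a non-zero exit code blocks the commit

if __name__ == "__main__":
    sys.exit(main())
```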


Resilience in Deep Systems

As your system grows, the connections between microservices become more complex. Communicating in a fault-tolerant way and keeping the data that moves between services consistent and fresh become a huge challenge. Sometimes microservices must communicate in a synchronous way. However, using synchronous communications, like REST, across the entire deep system makes the various components in the chain very tightly coupled to each other. It creates an increased dependency on the network’s reliability. Also, every microservice in the chain needs to be fully available to avoid data inconsistency, or worse, a system outage if one of the links in a microservices chain is down. In reality, we found that such a deep system behaves more like a monolith, or more precisely a distributed monolith, which prevents you from enjoying the full benefits of microservices. Using an asynchronous, event-driven architecture enables your microservices to publish fresh data updates to other microservices. Unlike synchronous communication, adding more subscribers to the data is easy and will not hammer the publisher service with more traffic.
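To make the publish/subscribe idea concrete, here is a toy in-process sketch in Python, assuming a hypothetical EventBus class: each subscriber gets its own queue, so adding subscribers adds no synchronous calls back to the publisher. A real deployment would use a broker such as Kafka or RabbitMQ rather than an in-memory object.

```python
import queue
import threading

class EventBus:
    """Toy pub/sub: each subscriber owns a queue; publishers never block on consumers."""

    def __init__(self):
        self._subscribers = {}          # topic -> list of subscriber queues
        self._lock = threading.Lock()

    def subscribe(self, topic):
        q = queue.Queue()
        with self._lock:
            self._subscribers.setdefault(topic, []).append(q)
        return q

    def publish(self, topic, event):
        with self._lock:
            queues = list(self._subscribers.get(topic, []))
        for q in queues:
            q.put(event)                # fire-and-forget: no waiting on subscribers

# Two downstream services consume the same "account.updated" events independently.
bus = EventBus()
ledger_q = bus.subscribe("account.updated")
notify_q = bus.subscribe("account.updated")

bus.publish("account.updated", {"account_id": 42, "balance": 99.50})
print(ledger_q.get())   # the ledger service sees the fresh data
print(notify_q.get())   # the notification service sees the same event, at no extra cost to the publisher
```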


Security Jobs With a Future -- And Ones on the Way Out

"The jobs aren't the same as two or three years ago," he acknowledges. "The types of skill sets employers are looking for is evolving rapidly." Three factors have led the evolution, O'Malley says. The first, of course, is COVID-19 and the sudden need for large-scale remote workforces. "Through this we are seeing a need for people who understand zero-trust work environments," he says. "Job titles around knowing VPN [technology] and how to enable remote work with the understanding that everyone should be considered an outsider [are gaining popularity]." The next trend is cloud computing. With more organizations putting their workloads in public and private clouds, they've become less interested in hardware expertise and want people who understand the tech's complex IT infrastructure. A bigger focus on business resiliency is the third major trend. The know-how needed here emphasizes technologies that make a network more intelligent and enable it to learn how to protect itself. Think: automation, artificial intelligence, and machine learning. The Edge asked around about which titles and skills security hiring managers are interested in today. 


Agile FAQ: Get started with these Agile basics

The Agile Manifesto prioritizes working software over comprehensive documentation -- though don't ignore the latter completely. This is an Agile FAQ for newcomers and experienced practitioners alike, as many people mistakenly think they should avoid comprehensive documentation in Agile. The Agile team should produce software documentation. Project managers and teams should determine what kind of documentation will deliver the most value. Product documentation, for example, helps customers understand, use and troubleshoot the product. Process documentation represents all of the information about planning, development and release. Similarly, Agile requirements are difficult to gather, as they change frequently, but they're still valuable. Rather than set firm requirements at the start of a project, developers change requirements during a project to best suit customer wishes and needs. Agile teams iterate regularly, and they should likewise adapt requirements accordingly. ... When developers start a new project, it can be hard to estimate how long each piece of the project will take. Agile teams can typically gauge how complex or difficult a requirement will be to fulfill, relative to the other requirements.


Facebook’s new A.I. takes image recognition to a whole new level

This might seem a strange piece of research for Facebook to focus on. Better news feed algorithms? Sure. New ways of suggesting brands or content you could be interested in interacting with? Certainly. But turning 2D images into 3D ones? This doesn’t immediately seem like the kind of research you’d expect a social media giant to be investing in. But it is — even if there’s no immediate plan to turn this into a user-facing feature on Facebook. For the past seven years, Facebook has been working to establish itself as a leading presence in the field of artificial intelligence. In 2013, Yann LeCun, one of the world’s foremost authorities on deep learning, took a job at Facebook to do A.I. on a scale that would be almost impossible in 99% of the world’s A.I. labs. Since then, Facebook has expanded its A.I. division — called FAIR (Facebook A.I. Research) — all over the world. Today, it dedicates 300 full-time engineers and scientists to the goal of coming up with the cool artificial intelligence tech of the future. It has FAIR offices in Seattle, Pittsburgh, Menlo Park, New York, Montreal, Boston, Paris, London, and Tel Aviv, Israel — all staffed by some of the top researchers in the field.


Honeywell Wants To Show What Quantum Computing Can Do For The World

The companies that understand the potential impact of quantum computing on their industries are already looking at what it would take to introduce this new computing capability into their existing processes and what they need to adjust or develop from scratch, according to Uttley. These companies will be ready for the shift from “emergent” to “classically impractical,” which is going to be “a binary moment,” and they will be able “to take advantage of it immediately.” The last stage of the quantum evolution will be classically impossible: “you couldn’t in the timeframe of the universe do this computation on a classical best-performing supercomputer that you can on a quantum computer,” says Uttley. He mentions quantum chemistry, machine learning, and optimization challenges (warehouse routing, aircraft maintenance) as applications that will benefit from quantum computing. But “what shows the most promise right now are hybrid [resources]”: “you do just one thing, very efficiently, on a quantum computer,” and run the other parts of the algorithm or calculation on a classical computer. Uttley predicts that “for the foreseeable future we will see co-processing,” combining the power of today’s computers with the power of emerging quantum computing solutions.


Data Prep for Machine Learning: Encoding

Data preparation for ML is deceptive because the process is conceptually easy. However, there are many steps, and each step is much more complicated than you might expect if you're new to ML. This article explains the eighth and ninth steps ... Other Data Science Lab articles explain the other seven steps. The data preparation series of articles can be found here. The tasks ... are usually not followed in a strictly sequential order. You often have to backtrack and jump around to different tasks. But it's a good idea to follow the steps shown in order as much as possible. For example, it's better to normalize data before encoding because encoding generates many additional numeric columns which makes it a bit more complicated to normalize the original numeric data. ... A complete explanation of the many different types of data encoding would literally require an entire book on the subject. But there are a few encoding techniques that are used in the majority of ML problem scenarios. Understanding these few key techniques will allow you to understand the less-common techniques if you encounter them. In most situations, predictor variables that have three or more possible values are encoded using one-hot encoding, also called 1-of-N or 1-of-C encoding.
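For instance, here is a minimal one-hot (1-of-N) encoding sketch with pandas, using a made-up three-valued predictor column:

```python
import pandas as pd

# Hypothetical predictor column with three possible values.
df = pd.DataFrame({"color": ["red", "blue", "green", "blue"]})

# One-hot (1-of-N) encoding: each distinct value becomes its own 0/1 column.
encoded = pd.get_dummies(df, columns=["color"], dtype=int)
print(encoded)
#    color_blue  color_green  color_red
# 0           0            0          1
# 1           1            0          0
# 2           0            1          0
# 3           1            0          0
```

Note how one categorical column becomes three numeric ones, which is why the article suggests normalizing the original numeric data before encoding.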


NIST Issues Final Guidance on 'Zero Trust' Architecture

NIST notes that zero trust is not a stand-alone architecture that can be implemented all at once. Instead, it's an evolving concept that cuts across all aspects of IT. "Zero trust is the term for an evolving set of cybersecurity paradigms that move defenses from static, network-based perimeters to focus on users, assets and resources," according to the guidelines document. "Transitioning to [zero trust architecture] is a journey concerning how an organization evaluates risk in its mission and cannot simply be accomplished with a wholesale replacement of technology." Rose notes that to implement zero trust, organizations need to delve deeper into workflows and ask such questions as: How are systems used? Who can access them? Why are they accessing them? Under what circumstances are they accessing them? "You're building a security architecture and a set of policies by bringing in more sources of information about how to design those policies. ... It's a more holistic approach to security," Rose says. Because the zero trust concept is relatively new, NIST is not offering a list of best practices, Rose says. Organizations that want to adopt this concept should start with a risk-based analysis, he stresses. 


Compliance in a Connected World

Early threat detection and response is clearly part of the answer to protecting increasingly connected networks, because without a threat, the risk, even to a vulnerable network, is low. However, ensuring the network is not vulnerable to adversaries in the first place is the assurance that many SOCs are striving for. Indeed, one cannot achieve the highest level of security without the other. Even with increased capacity in your SOC to review cyber security practices and carry out regular audits, the amount of information garnered, and the need to verify its accuracy, still risks being far too overwhelming for most teams to cope with. For many organisations the answers lie in accurate audit automation and the powerful analysis of aggregated diagnostics data. This enables frequent enterprise-wide auditing to be carried out without the need for skilled network assessors to be undertaking repetitive, time-consuming tasks which are prone to error. Instead, accurate detection and diagnostics data can be analysed via a SIEM or SOAR dashboard, which allows assessors to group, classify and prioritise vulnerabilities for fixes which can be implemented by a skilled professional, or automatically via a playbook.


The biggest data breach fines, penalties and settlements so far

GDPR fines are like buses: You wait ages for one and then two show up at the same time. Just days after a record fine for British Airways, the ICO issued a second massive fine over a data breach. Marriott International was fined £99 million [~$124 million] after payment information, names, addresses, phone numbers, email addresses and passport numbers of up to 500 million customers were compromised. The source of the breach was Marriott's Starwood subsidiary; attackers were thought to be on the Starwood network for up to four years, some three of them after it was bought by Marriott in 2015. According to the ICO’s statement, Marriott “failed to undertake sufficient due diligence when it bought Starwood and should also have done more to secure its systems.” Marriott CEO Arne Sorenson said the company was “disappointed” with the fine and plans to contest the penalty. The hotel chain was also fined 1.5 million Lira (~$265,000) by the Turkish data protection authority — not under the GDPR legislation — for the breach, highlighting how one breach can result in multiple fines globally.



Quote for the day:

"Making the development of people an equal partner with performance is a decision you make." -- Ken Blanchard

Daily Tech Digest - August 13, 2020

Building a Banking Infrastructure with Microservices

On the whole, the goal is to make engineers as autonomous as possible in organising their domain into the structure of the microservices they write and support. As a Platform Team, we provide knowledge, documentation and tooling to support that. Each microservice has an associated owning team and they are responsible for the health of their services. When a service moves owners, other responsibilities like alerts and code review also move over automatically. ... Code generation starts from the very beginning of a service. An engineer will use a generator to create the skeleton structure of their service. This will generate all the required folder structure as well as write boilerplate code so things like the RPC server are well configured and have appropriate metrics. Engineers can then define aspects like their RPC interface and use a code generator to generate implementation stubs of their RPC calls. Small reductions in cognitive overhead for engineers add up, allowing them to focus on business choices and reducing the paradox of choice. We do find cases where engineers need to deviate. That’s absolutely okay; our goal is not to prescribe this structure for every single service. We allow engineers to make the choice, with the knowledge that deviations need appropriate documentation/justification and knowledge transfer.
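The article does not show its generator, but a minimal sketch of the idea might look like the hypothetical script below: it writes a standard folder layout, a boilerplate entry point, and an ownership file so every new service starts from the same shape.

```python
"""Hypothetical service-skeleton generator (the article's actual tooling is not shown)."""
from pathlib import Path
import sys

MAIN_TEMPLATE = '''"""Entry point for service {name}; RPC server wiring and metrics would go here."""

def main():
    print("starting {name}")

if __name__ == "__main__":
    main()
'''

OWNERS_TEMPLATE = "owning-team: {team}\n"  # ownership metadata travels with the service

def generate(name: str, team: str, root: Path = Path(".")) -> Path:
    service_dir = root / name
    for sub in ("handlers", "internal", "tests"):
        (service_dir / sub).mkdir(parents=True, exist_ok=True)
    (service_dir / "main.py").write_text(MAIN_TEMPLATE.format(name=name))
    (service_dir / "OWNERS").write_text(OWNERS_TEMPLATE.format(team=team))
    return service_dir

if __name__ == "__main__":
    # Example: python new_service.py payments-ledger platform-team
    print(generate(sys.argv[1], sys.argv[2]))
```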


Cybersecurity Skills Gap Worsens, Fueled by Lack of Career Development

The fundamental causes for the skills gap are myriad, starting with a lack of training and career-development opportunities. About 68 percent of the cybersecurity professionals surveyed by ESG/ISSA said they don’t have a well-defined career path, and basic growth activities, such as finding a mentor, getting basic cybersecurity certifications, taking on cybersecurity internships and joining a professional organization, are missing steps in their endeavors. The survey also found that many professionals start out in IT, and find themselves working in cybersecurity without a complete skill set. ... The COVID-19 pandemic is not helping matters on this front: “Increasingly, lockdown has driven us all online and the training industry has been somewhat slow to respond with engaging, practical training supported by skilled practitioners who can share their expertise,” Steve Durbin, managing director of the Information Security Forum, told Threatpost. “Apprenticeships, on-the-job learning, backed up with support training packages are the way to go to tackle head on a shortage that is not going to go away.”


The Top 10 Digital Transformation Trends Of 2020: A Post Covid-19 Assessment

Using big data and analytics has always been on a steady growth trajectory and then COVID-19 exploded and made the need for data even greater. Companies and institutions like Johns Hopkins and SAS created COVID-19 health dashboards that compiled data from a myriad of sources to help governments and businesses make decisions to protect citizens, employees, and other stakeholders. Now, as businesses are in re-opening phases, we are using data and analytics for contact tracing and to help make other decisions in the workplace. There have been recent announcements from several big tech companies including Microsoft, HPE, Oracle, Cisco and Salesforce focusing on developing data driven tools to help bring employees back to work safely — some even offering it for free to its customers. The need for data to make all business decisions has grown, but this year, we saw data analytics being used in real time to make critical business and life-saving decisions, and I am certain it won’t stop there. I expect massive continued investment from companies into data and analytics capabilities that power faster, leaner and smarter organizations in the wake of 2020’s Global Pandemic and economic strains.


How government policies are harming the IT sector | Opinion

Thanks to a series of misplaced policy choices, the government has systematically eroded the permitted operations of the Indian outsourcing industry to the point where it is no longer globally competitive. Foremost among these are the telecom regulations imposed on a category of companies broadly known as Other Service Providers (OSPs). Anyone who provides “application services” is an OSP and the term “application services” is defined to mean “tele-banking, telemedicine, tele-education, tele-trading, e-commerce, call centres, network operation centres and other IT-enabled services”. When it was first introduced, these regulations were supposed to apply to the traditional outsourcing industry, focusing primarily on call centre operations. However, it has, over the years been interpreted far more widely than originally intended. While OSPs do not require a license to operate, they do have to comply with a number of telecom restrictions. The central regulatory philosophy behind these restrictions is the government’s insistence that voice calls terminated in an OSP facility over the regular Public Switched Telephone Network (PSTN) must be kept from intermingling with those carried over the data network. 


Data science's ongoing battle to quell bias in machine learning

Data bias is tricky because it can arise from so many different things. As you have keyed into, there should be initial considerations of how the data is being collected and processed to see if there are operational or process oversight fixes that exist that could prevent human bias from entering in the data creation phase. The next thing I like to look at is data imbalances between classes, features, etc. Oftentimes, models can be flagged as treating one group unfairly, but the reason is there is not a large enough population of that class to really know for certain. Obviously, we shouldn't use models on people when there's not enough information about them to make good decisions. ... Machine learning interpretability [is about] how transparent model architectures are and increasing how intuitive and understandable machine learning models can be. It is one of the components that we believe makes up the larger picture of responsible AI. Put simply, it's really hard to mitigate risks you don't understand, which is why this work is so critical. By using things like feature importance, Shapley values, surrogate decision trees, we are able to paint a really good picture of why the model came to the conclusion it did -- and if the reason it came to that conclusion violates regulatory rules or makes common business sense.
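As one illustration of the surrogate-tree idea mentioned above, here is a small scikit-learn sketch on synthetic data with hypothetical feature names: a shallow, readable decision tree is fit to the predictions of a more opaque model so its decision rules can be inspected.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for a real modeling dataset; the feature names are invented.
X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
feature_names = ["age", "income", "tenure", "balance", "num_products"]

# The "black box" model whose behaviour we want to explain.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Surrogate: a shallow tree trained to mimic the black box's predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# How faithfully the surrogate mimics the black box, and which rules it uses.
fidelity = surrogate.score(X, black_box.predict(X))
print(f"surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate, feature_names=feature_names))
```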


Integration Testing ASP.NET Core Applications - Best Practices

Compared to unit tests, this allows much more of the application code to be tested together, which can rapidly validate the end-to-end behaviour of your service. These are also sometimes referred to as functional tests, since the definition of integration testing may be applied to more comprehensive multi-service testing as well. It’s entirely possible to test your applications in concert with their dependencies, such as databases or other APIs they expect to call. In the course, I show how boundaries can be defined using fakes to test your application without external dependencies, which allows your tests to be run locally during development. Of course, you can avoid such fakes to test with some real dependencies as well. This form of in-memory testing can then easily be expanded to broader testing of multiple services as part of CI/CD workflows. Producing these courses is a lot of work, but that effort is rewarded when people view the course and hopefully leave with new skills to apply in their work. If you have a subscription to Pluralsight already, I hope you’ll add this course to your bookmarks for future viewing.
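The course itself uses ASP.NET Core's in-memory test host; as a language-neutral illustration of the same pattern (host the app in memory, swap the boundary dependency for a fake), here is a rough sketch using Python's FastAPI test client with an invented orders endpoint and repository:

```python
# Same pattern as the ASP.NET Core approach described above, sketched with FastAPI:
# the app runs in memory and the external dependency is replaced by a fake.
from fastapi import Depends, FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

def get_orders_repo():
    raise RuntimeError("real repository (database) is not available in tests")

@app.get("/orders/{order_id}")
def read_order(order_id: int, repo=Depends(get_orders_repo)):
    return repo[order_id]

def test_read_order_returns_order():
    fake_repo = {1: {"id": 1, "status": "shipped"}}            # fake at the boundary
    app.dependency_overrides[get_orders_repo] = lambda: fake_repo
    client = TestClient(app)                                    # in-memory, no network

    response = client.get("/orders/1")

    assert response.status_code == 200
    assert response.json()["status"] == "shipped"
```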


How Robotic Process Automation (RPA) and digital transformation work together

RPA is not on its own an intelligent solution. As Everest Group explains in its RPA primer, “RPA is a deterministic solution, the outcome of which is known; used mostly for transactional activities and standardized processes.” Some common RPA use cases include order processing, financial report generation, IT support, and data aggregation and reconciliation. However, as organizations proceed along their digital transformation journeys, the fact that many RPA solutions are beginning to integrate cognitive capabilities increases their value proposition. For example, RPA might be coupled with intelligent character recognition (ICR) and optical character recognition (OCR). Contact center RPA applications might incorporate natural language processing (NLP) and natural language generation (NLG) to enable chatbots. “These are all elements of an intelligent automation continuum that allow a digital transformation,” Wagner says. “RPA is one piece of a lengthy continuum of intelligent automation technologies that, used together and in an integrated manner, can very dramatically change the operational cost and speed of an organization while also enhancing compliance and reducing costly errors.”


It’s not about cloud vs edge, it’s about connections

“What is wanted is a new type of networking platform that establishes a reliable, high performance, zero trust connection across the Internet — meaning one that will only connect an authorised device and authorised user using an authorised application (ie ‘zero trust’),” he said. “With zero trust, every connection is continuously assessed to identify who or what is requesting access, have they properly authenticated, and are they authorised to use the resource or service being requested — before any network access is permitted. “This can be achieved using software defined networking loaded into the edge device or embedding networking capabilities into applications with SDKs and APIs. This eliminates the need to procure, install and commission hardware. Unlike VPNs, these software-defined connections can be tightly segmented according to company policies (policy based access), determining which workgroups or devices can be connected, and what they can share and how. “This suggests a new paradigm: an edge-core-cloud continuum, where apps and services will run wherever most needed, connected via zero trust networking access (ZTNA) capable of securing the edge to cloud continuum end to end...."


Put Value Creation at the Center of Your Transformation

The leader of any transformation effort needs to be resilient and determined to deliver the program’s full potential. Yet that person also needs to understand and acknowledge the needs of employees during a radical upheaval. Sometimes leaders must be pragmatic—particularly when the company’s long-term survival is at stake. At other times, empathy and flexibility are more effective. One CEO brought determination and conviction to the company’s transformation, and he was able to tamp down dissent, gossip, and negative press. He was also willing to reverse his decisions on some matters. For example, one cost-reduction measure was a cutback in employee travel. Initially, the CEO told employees that they needed direct approval from him for any travel expenses above a certain amount. However, after about a year, he relaxed this policy after considering employees’ feedback. ... Transformations are a proving ground for leadership teams. They can be catalysts to long-term business success and financial performance—but companies undergoing a transformation underperform almost as often as they outperform. Our analysis shows that there is a systematic way to increase the odds of success.


The sinking fortunes of the shipping box

Another surprising problem for the global manufacturing model is that shipping has actually become less efficient, largely due to business decisions of the shippers. Maersk, the world-leading Danish firm, continued to order ever-larger container ships after the financial crisis, convinced that consumer demand would quickly resume its previous growth. When it did not, the firm and its competitors were forced to sail half-full megaships around the world. Because the ships were several meters wider than their predecessors, the process of removing containers took longer. And they were designed to travel more slowly to conserve fuel. Delays became much more common, undermining trust in the industry. Without reliable shipping, Levinson writes, firms have chosen to hold more inventory — which flies in the face of the prevailing orthodoxy. But things have changed. Inventories can act as a buffer when supply chains are in distress. For firms, “minimizing production costs was no longer the sole priority; making sure the goods were available when needed ranked just as highly.” It seems inevitable that the coronavirus pandemic will reinforce this drift back toward greater self-sufficiency in manufacturing.



Quote for the day:

"To be successful you have to be lucky, or a little mad, or very talented, or find yourself in a rapid growth field." -- Edward de Bono

Daily Tech Digest - August 12, 2020

Can behavioural banking drive financial literacy and inclusion?

In good times, the need to improve financial literacy is widely accepted by banking industry leaders and consumers alike. This important topic is regularly discussed by experts at the World Economic Forum and built into initiatives sponsored by the United Nations. Regarded as an economic good, financial literacy is critical to achieving financial inclusion. What about now, in decidedly less-than-good times? How are banks prepared to promote financial literacy for millennials and especially Gen Z, as they face a world in financial turmoil? ... The right systems helped the bank get up and running just 18 months after its initial launch announcement. Powerful, reliable technology also helped the company create a customer onboarding application that can open a new account within just five minutes. “The technology is extremely important for us,” says Frey. “It has to be fast, agile, and robust. We needed a solid workhorse with a huge amount of flexibility at the configuration level.” In 2020, Discovery will begin looking for ways to incorporate rapidly developing technologies such as artificial intelligence and machine learning into its solutions. Most important, however, is listening to customers and ensuring that the bank delivers the most pleasant, rewarding experience possible.


With DevOps, security is everybody’s responsibility. OK, so what’s next?

DevSecOps solutions are by nature designed to be preventative. The idea is to remove complexity by baking robust security methodologies into software development from the earliest stages. Get it right from the outset, and reactive firefighting is greatly reduced. Conveniently, this model – “shifting security left” to the coder rather than the expert in a fixed hierarchy – also makes sense when developing on cloud platforms that assume rapid deployment and collaboration. There is no development team, security team, or IT deployment team because they are one and the same person. In theory, that’s how security misconfigurations can be caught before they do harm. However, when it comes to cloud development, “shift left” is more talked about than practised. This situation has crept up on organisations that haven’t realised how programming culture has changed rapidly in the cloud era. “There is a lack of control in this model. With the shift into cloud development and the fact that coders can always get a better answer on Stack Overflow and GitHub, it’s become practically impossible to track the supply chain. It’s a governance problem,” says Guy Eisenkot.


Surface Duo: Microsoft's $1,400 dual-screen Android phone coming September 10

Microsoft is counting on users seeing the Duo as filling an untapped niche. But for people used to thinking about carrying no more than two devices -- usually a PC/tablet or phone -- where does the Duo fit? In its first iteration, with a seemingly mediocre 11 MP camera, an older Snapdragon 855 processor and a relatively heavy form factor (about half a pound), the Duo is not going to replace my Pixel 3XL Android phone. And with a total screen size when open of 8.1 inches, the Duo is just too small to replace my PC. Panay and team are touting the Duo as a device that will give people a better way to get things done, to create and to connect. As was the case with the currently postponed, Windows 10X-based Surface Neo device, Microsoft's contention is two separate screens connected via a hinge help people work smarter and faster than they could with a single screen of any size. Officials say they've got research and years of work that backs up this claim. I do think more screen is better for almost everything, but for now, I am having trouble buying the idea that a hinge/division in the middle of two screens is going to make any kind of magic happen in my brain.


The clear Sky strategy

You need to have your eyes to the horizon and your feet on the floor. At all times. And it’s quite a discipline to do that. You see a lot of people who are consumed about managing the now, and then if you look at the last few months, there’s not been a lot of forward thinking. Then you also see other people who, perhaps the longer they are in their roles, spend more and more time thinking about the future horizon. That’s all very alluring and appealing, but they disconnect with the immediacy of what’s important today. You must try to think of both of those things and also encourage everybody else to think of their own role in that way. So, if you’re in broadcast technology today and you’re running that function or department, how do you get your colleagues to look at the future broadcast technologies and at the same time equip people to shoot with their iPhones and get the news out quickly? What you end up with is this networked brain. Everybody in Sky should be thinking about where the company should go, but also “How do I personally make sure I’m doing what is needed?”


Did Intel fail to protect proprietary secrets, or misconfigure servers? Lessons from the leak

Regardless of the circumstances, there are key takeaways from the incident. First and foremost, the unauthorized disclosure of source code and other sensitive intellectual property could potentially be a boon for those seeking to steal corporate secrets. “Intel’s technology is almost ubiquitous, and the leaked device designs and firmware source code can put businesses and individuals at risk,” said Ilia Sotnikov, VP of product management at Netwrix. “Hackers and Intel’s own security research team are probably racing now to identify flaws in the leaked source code that can be exploited. Companies should take steps to identify what technology may be impacted and stay tuned for advisory and hotfix announcements from Intel.” “While we often think of data breaches in the context of customer data lost and potential PII leakage, it is very important that we also consider the value of intellectual property, especially for very innovative organizations and organizations with a large market share,” said Erich Kron, security awareness advocate at KnowBe4. “This intellectual property can be very valuable to potential competitors, and even nation states, who often hope to capitalize on the research and development done by others.”


Researchers Trick Facial-Recognition Systems

The model then continuously created and tested fake images of the two individuals by blending the facial features of both subjects. Over hundreds of training loops, the machine-learning model eventually got to a point where it was generating images that looked like a valid passport photo of one of the individuals: even as the facial recognition system identified the photo as the other person. Povolny says the passport-verification system attack scenario — though not the primary focus of the research — is theoretically possible to carry out. Because digital passport photos are now accepted, an attacker can produce a fake image of an accomplice, submit a passport application, and have the image saved in the passport database. So if a live photo of the attacker later gets taken at an airport — at an automated passport-verification kiosk, for instance — the image would be identified as that of the accomplice. "This does not require the attacker to have any access at all to the passport system; simply that the passport-system database contains the photo of the accomplice submitted when they apply for the passport," he says.


The problems AI has today go back centuries

The ties between algorithmic discrimination and colonial racism are perhaps the most obvious: algorithms built to automate procedures and trained on data within a racially unjust society end up replicating those racist outcomes in their results. But much of the scholarship on this type of harm from AI focuses on examples in the US. Examining it in the context of coloniality allows for a global perspective: America isn’t the only place with social inequities. “There are always groups that are identified and subjected,” Isaac says. The phenomenon of ghost work, the invisible data labor required to support AI innovation, neatly extends the historical economic relationship between colonizer and colonized. Many former US and UK colonies—the Philippines, Kenya, and India—have become ghost-working hubs for US and UK companies. The countries’ cheap, English-speaking labor forces, which make them a natural fit for data work, exist because of their colonial histories. AI systems are sometimes tried out on more vulnerable groups before being implemented for “real” users. Cambridge Analytica, for example, beta-tested its algorithms on the 2015 


The State of AI-Driven Digital Transformation

Governments are transforming service delivery through AI as well. In China, a number of AI pilot programmes are rolling out across the court system, including an “AI robot” that can answer legal questions in real time, tools to automate evidence analysis and the automated transcribing of court proceedings that would remove the need for judicial clerks to double as stenographers. These technological developments point to a future in which routine court procedures are mostly handled by machines, so that judges can reserve their attention for more complex and demanding cases. The other major use of AI would be in the areas of security and data privacy. In fact, the Forrester study found that 61 percent of firms in APAC are already enhancing or implementing their data privacy and security-related capabilities using AI. For example, financial services giant AXA IT has been leveraging machine learning and AI to thwart online security threats. They’ve partnered with cybersecurity firm Darktrace whose Enterprise Immune System learns how normal users behave so as to detect dangerous anomalies with the help of AI. Data lie at the heart of AI. The success of AI-driven digital transformation, therefore, relies greatly on the ability to draw insights from big data. 


How to Keep APIs Secure From Bot Attacks

Many APIs do not check authentication status when the request comes from a genuine user. Attackers exploit such flaws in different ways, such as session hijacking and account aggregation, to imitate genuine API calls. Attackers also reverse engineer mobile applications to discover how APIs are invoked. If API keys are embedded into the application, an API breach may occur. API keys should not be used for user authentication. Cybercriminals also perform credential stuffing attacks to takeover user accounts. ... Many APIs lack robust encryption between the API client and server. Attackers exploit vulnerabilities through man-in-the-middle attacks. Attackers intercept unencrypted or poorly protected API transactions to steal sensitive information or alter transaction data. Also, the ubiquitous use of mobile devices, cloud systems and microservice patterns further complicate API security because multiple gateways are now involved in facilitating interoperability among diverse web applications. The encryption of data flowing through all these channels is paramount. ... APIs are vulnerable to business logic abuse. This is exactly why a dedicated bot management solution is required and why applying detection heuristics that are good for both web and mobile apps can generate many errors — false positives and false negatives.
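A small sketch of the point that an API key identifies the calling application, not the user: the hypothetical check below rejects requests unless both a registered key and a valid per-user token are presented (token verification is deliberately stubbed out).

```python
# Illustrative only: the key identifies the client app, the token identifies the person.
VALID_API_KEYS = {"mobile-app-key", "web-app-key"}   # hypothetical registered clients

def verify_user_token(token: str) -> bool:
    """Stub: a real service would validate a signed token (e.g. a JWT) and its expiry."""
    return token.startswith("user-") and len(token) > 10

def authorize_request(headers: dict) -> tuple[int, str]:
    api_key = headers.get("X-Api-Key", "")
    user_token = headers.get("Authorization", "").removeprefix("Bearer ")

    if api_key not in VALID_API_KEYS:
        return 401, "unknown client"
    if not verify_user_token(user_token):    # an API key alone is not user authentication
        return 401, "missing or invalid user credentials"
    return 200, "ok"

# A replayed call carrying a stolen API key but no user session still fails.
print(authorize_request({"X-Api-Key": "mobile-app-key"}))
print(authorize_request({"X-Api-Key": "mobile-app-key",
                         "Authorization": "Bearer user-3f9c2a7d1b"}))
```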


Blazor vs Angular

Blazor is also a framework that enables you to build client web applications that run in the browser, but using C# instead of TypeScript. When you create a new Blazor app, it arrives with a few carefully selected packages (the essentials needed to make everything work) and you can install additional packages using NuGet. From here, you build your app as a series of components, using the Razor markup language, with your UI logic written using C#. The browser can't run C# code directly, so just like the Angular AOT approach you'll lean on the C# compiler to compile your C# and Razor code into a series of .dll files. To publish your app, you can use .NET's built-in publish command, which bundles up your application into a number of files (HTML, CSS, JavaScript and DLLs), which can then be published to any web server that can serve static files. When a user accesses your Blazor WASM application, a Blazor JavaScript file takes over, which downloads the .NET runtime, your application and its dependencies before running your app using WebAssembly. Blazor then takes care of updating the DOM, rendering elements and forwarding events (such as button clicks) to your application code.


AI company pivots to helping people who lost their job find a new source of health insurance

In addition to making health insurance somewhat easier to get, the Affordable Care Act funded navigators who helped individuals choose the right insurance plan. The Trump administration cut funding for the navigators from $63 million in 2016 to $10 million in 2018. During the 2019 open enrollment period for the federal ACA health insurance marketplace, overall enrollment dropped by 306,000 people. "While that may not seem like a lot, the average annual medical expense is around $3,000 per person, and a shortfall of covered patients could represent over $900,000,000 of medical expenses [that] will not be paid by health insurance," Showalter said. When states banned elective medical procedures temporarily during the early months of the pandemic, this cut off an important revenue stream for hospitals, and many laid off workers. Some of these layoffs included patient navigators who helped patients enroll in health insurance, particularly Medicaid. Showalter said that all Jvion customers have had at least a few navigators on staff but not enough to reach every patient in need of assistance.



Quote for the day:

"A good general not only sees the way to victory; he also knows when victory is impossible." -- Polybius

Daily Tech Digest - August 11, 2020

How AI can create self-driving data centers

Data centers are full of physical equipment that needs regular maintenance. AI systems can go beyond scheduled maintenance and help with the collection and analysis of telemetry data that can pinpoint specific areas that require immediate attention. "AI tools can sniff through all of that data and spot patterns, spot anomalies," Schulz says. "Health monitoring starts with checking if equipment is configured correctly and performing to expectations," Bizo adds. "With hundreds or even thousands of IT cabinets with tens of thousands of components, such mundane tasks can be labor intensive, and thus not always performed in a timely and thorough fashion." He points out that predictive equipment-failure modeling based on vast amounts of sensory data logs can "spot a looming component or equipment failure and assess whether it needs immediate maintenance to avoid any loss of capacity that might cause a service outage." Michael Bushong, vice president of enterprise and cloud marketing at Juniper Networks, argues that enterprise data-center operators should ignore some of the overpromises and hype associated with AI, and focus on what he calls "boring innovations."
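As a toy sketch of the "spot anomalies in telemetry" idea, here is a scikit-learn IsolationForest run on synthetic temperature and fan-speed readings; the data and parameters are invented, and this is not any vendor's actual tooling.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic telemetry: [inlet temperature (C), fan speed (RPM)] for healthy cabinets...
normal = np.column_stack([rng.normal(24, 1.5, 500), rng.normal(8000, 400, 500)])
# ...plus a few readings from a cabinet running hot with a slowing fan.
faulty = np.array([[31.0, 6100.0], [33.5, 5800.0], [30.2, 6300.0]])
telemetry = np.vstack([normal, faulty])

# IsolationForest flags points that are easy to isolate from the bulk of the data.
model = IsolationForest(contamination=0.01, random_state=0).fit(telemetry)
labels = model.predict(telemetry)   # -1 = anomaly, 1 = normal

print("flagged readings:")
print(telemetry[labels == -1])
```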


Hackers Could Use IoT Botnets to Manipulate Energy Markets

Unlike regular IoT botnets that are ubiquitous and available for hire on criminal forums, high-wattage botnets are not as practical to amass. None are known to be available for rent by would-be attackers. But over the past couple of years, researchers have begun investigating how they could be weaponized—one example looked at the possibility of mass blackouts—in anticipation that such botnets will someday emerge. Meanwhile, the idea of energy market manipulation in general is not far-fetched. The US Federal Energy Regulatory Commission investigated 16 potential market manipulation cases in 2018, though it closed 14 of them with no action. Additionally, in mid-May, attackers breached the IT systems of Elexon, the platform used to run the United Kingdom's energy market. The attack did not appear to result in market changes. The researchers caution that, based on their analysis, much smaller demand fluctuations than you might expect could affect pricing, and that it would take as few as 50,000 infected devices to pull off an impactful attack. In contrast, many current criminal IoT botnets contain millions of bots. Consumers whose devices are unwittingly conscripted into a high-wattage botnet would also be unlikely to notice anything amiss; attackers could intentionally turn on devices to pull power late at night or while people are likely to be out of the house. 


How to refactor God objects in C#

When multiple responsibilities are assigned to a single class, the class violates the single responsibility principle. Again, such classes are difficult to maintain, extend, test, and integrate with other parts of the application. The single responsibility principle states that a class (or subsystem, module, or function) should have one and only one reason for change. If there are two reasons for a class to change, the functionality should be split into two classes with each class handling one responsibility.  Some of the benefits of the single responsibility principle include orthogonality, high cohesion, and low coupling. A system is orthogonal if changing a component changes the state of that component only, and doesn’t affect other components of the application. In other words, the term orthogonality means that operations change only one thing without affecting other parts of the code. Coupling is defined as the degree of interdependence that exists between software modules and how closely they are connected to each other. When this coupling is high, then the software modules are interdependent, i.e., they cannot function without each other.
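The article's examples are in C#, but the shape of the refactoring is language-agnostic; here is a compact sketch of the same split in Python: one class with two reasons to change becomes two classes with one responsibility each.

```python
# Before: one class with two reasons to change (report content vs. persistence).
class Report:
    def __init__(self, title, lines):
        self.title, self.lines = title, lines

    def render(self) -> str:
        return "\n".join([self.title, *self.lines])

    def save(self, path):                 # persistence concern mixed into the same class
        with open(path, "w") as f:
            f.write(self.render())

# After: each class has a single responsibility and can change independently.
class ReportRenderer:
    def render(self, title, lines) -> str:
        return "\n".join([title, *lines])

class ReportWriter:
    def save(self, text, path):
        with open(path, "w") as f:
            f.write(text)

renderer, writer = ReportRenderer(), ReportWriter()
writer.save(renderer.render("Q3 summary", ["revenue up", "costs flat"]), "report.txt")
```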


Cloud storage costs: How to get cloud storage bills under control

Cloud storage is not just about how much it costs per gigabyte stored. There are also costs associated with the transfer of data in and out of the cloud. In many services there are two costs: one per-GB each time servers in different domains communicate with each other, and another per-GB cost to transfer data over the internet. “For example, in AWS [Amazon Web Services], you are charged if you use a public IP address. Because you don’t buy dedicated bandwidth, there is an additional data transfer charge against each IP address – which can be a problem if you create websites and encourage people to download videos,” says Richard Blanford, CEO of IT service provider Fordway. “Every time a video is played you incur a charge, which will soon add up if several thousand people download your 100MB video.” ... The same issue applies with resilience and service recovery, where you will be charged for data traffic between domains to keep a second disaster recovery (DR) or failover environment in a different region or availability zone. “Moving data between regions and out of the public cloud also incurs a fee. Most companies that use a public cloud service pay this for day-to-day transactions, such as moving data from cloud-based storage to on-premise storage, and costs can quickly spiral as your tenancy grows,” says Blanford.
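As a back-of-the-envelope sketch of how the video example adds up, assuming an illustrative (not quoted) egress price per gigabyte:

```python
# Illustrative numbers only; check your provider's current egress pricing.
video_size_gb = 0.1          # the 100MB video from the example
downloads = 5_000            # "several thousand people"
egress_price_per_gb = 0.09   # assumed internet egress rate in USD, for illustration

data_out_gb = video_size_gb * downloads
cost = data_out_gb * egress_price_per_gb
print(f"{data_out_gb:.0f} GB transferred out, roughly ${cost:.2f} in charges")
# 500 GB out comes to about $45 at this rate; small per view, but it scales with popularity
```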


How Complexities Disturb and Improve Employee and Customer Experience

Regulations and laws often have good reasons to exist, especially in heavily regulated industries, such as banking, financials and pharma. Of course, we can find exceptions where these become difficult to justify. This rarely occurs when attentive leadership works to demonstrate how these externals either help the company or protect its stakeholders. When organizations can communicate consistently, they build buy-in and engagement. ... Develop a process that translates feedback into solutions. First, identify the balance in outcomes. If there are complexities that seem unnecessary, find out why they were designed or implemented in that way. Make sure that teams are in agreement that the complexity is not solving issues as intended but aggravating others. Take steps toward a solution. If you cannot solve it, raise the error to your direct leadership or team. Identifying these sources of complexity is an incredible value, and an opportunity for the organization to improve. The fewer the complexities, the more engaged employees become, and the better the experience an organization will deliver to its customers, partners and employees.


The Dark Side of Data

The ways that we use data have many inherent risks. There are hidden dangers in algorithmic decision making. Data is imperfect and algorithms often have built-in biases—biases that all too frequently have traumatic impacts on individuals and families as in this example of facial recognition gone wrong. Cathy O’Neil describes the risks of algorithmic dependencies and decisions in depth in her book Weapons of Math Destruction. We have yet to master data governance and data ethics. And now we need to step up to algorithmic governance and data science ethics. The abundance of data and the immense power that we have to process data bring both opportunity and risk. It is a grave error to pursue the opportunities without also making a serious commitment to managing the risks. Yes, data is informative and valuable—sometimes even invigorating and exciting. But we use it badly with too much attention to profits and too little attention to people. Data has a dark side of misuse and abuse. We are failing to step up to the real value opportunities of data—to improve the human condition—and we are failing to mitigate the risks inherent in modern data capabilities.


AI Recruiting Tools Aim to Reduce Bias in the Hiring Process

“One of the unintended consequences would be to continue this historical trend, particularly in tech, where underserved groups such as African Americans are not within a sector that happens to have a compensation that is much greater than others,” says Fay Cobb Payton, a professor of information technology and analytics at North Carolina State University, in Raleigh. “You’re talking about a wealth gap that persists because groups cannot enter [such sectors], be sustained, and play long term.” Payton and her colleagues highlighted several companies—including GapJumpers—that take an “intentional design justice” approach to hiring diverse IT talent in a paper published last year in the journal Online Information Review. According to the paper’s authors, there is a broad spectrum of possible actions that AI hiring tools can perform. Some tools may just provide general suggestions about what kind of candidate to hire, whereas others may recommend specific applicants to human recruiters, and some may even make active screening and selection decisions about candidates. But whatever the AI’s role in the hiring process, there is a need for humans to have the capability to evaluate the system’s decisions and possibly override them.


Data Bytes: Gartner on the IaaS Boom, Plus Cloud Geography, IoT Security

“Enterprises and providers must work together to prioritize and support IoT security requirements,” said Alexandra Rehak, Internet of Things Practice Head at Omdia. “Providers need to make sure IoT security solutions are simple and can be easily understood and integrated. Given how high a priority this is for enterprise end users, providers also need to do more to educate customers as well as providing technology solutions, to help ensure IoT security is not a barrier for adoption.” When it comes to the medium- to long-term focus for IoT industry leaders, 81% agreed that 5G would “transform” the industry. The top two benefits from 5G deployment are expected to be the ability to manage a massive number of IoT devices (67%) and the ability to achieve ultra-low latency (60%), allowing businesses to be even more agile. “COVID-19 is expected to impact IoT in 2020,” said Zach Butler, Director, IoT World. “Despite this, Omdia forecasts potential in some segments including connected health, as innovators use IoT technologies to tackle some of the pressing crises of the moment. Long-term, there is little doubt that 5G will change the face of IoT, particularly in the automotive and manufacturing sectors.”


Ransomware: These warning signs could mean you are already under attack

Encryption of files by ransomware is the last thing that happens; before that, the crooks will spend weeks, or longer, investigating the network to discover weaknesses. One of the most common routes for ransomware gangs to make their way into corporate networks is via Remote Desktop Protocol (RDP) links left open to the internet. "Look at your environment and understand what your RDP exposure is, and make sure you have two-factor authentication on those links or have them behind a VPN," said Jared Phipps, VP at security company SentinelOne. Coronavirus lockdown means that more staff are working from home, and so more companies have opened up RDP links to make remote access easier. This is giving ransomware gangs an opening, Phipps said, so scanning your internet-facing systems for open RDP ports is a first step. Another warning sign could be unexpected software tools appearing on the network. Attackers may start with control of just one PC on a network – perhaps via a phishing email. With this toe-hold in the network, hackers will explore from there to see what else they can find to attack.
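As a minimal sketch of the "know your RDP exposure" step, the hypothetical script below checks whether TCP port 3389 answers on a list of your own externally visible hosts; the addresses are placeholders, and this should only be run against systems you are authorised to test.

```python
import socket

RDP_PORT = 3389
# Placeholder list of your own internet-facing addresses.
hosts = ["203.0.113.10", "203.0.113.11", "vpn.example.com"]

def rdp_open(host: str, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the RDP port succeeds within the timeout."""
    try:
        with socket.create_connection((host, RDP_PORT), timeout=timeout):
            return True
    except OSError:
        return False

for host in hosts:
    status = "OPEN: review exposure, require MFA or a VPN" if rdp_open(host) else "closed"
    print(f"{host}:{RDP_PORT} {status}")
```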


Creating a Progressive Web App with Blazor WebAssembly

The biggest challenges in designing your PWA are based on why you're creating a PWA. There are, at least, three reasons for creating a PWA. One is that you simply want your app to start faster (presumably a local copy of the app will start faster for your user than navigating to your site and downloading the app). Another good reason for creating a PWA is to reduce demands on the server: Because navigation between pages is handled locally, the demand on your server is reduced to just the REST requests that your app may make (especially if new versions of your app are downloaded from a different server, further reducing demand on your application server). A third reason (and probably the one that you're thinking of) is to enable your app to work even when the user doesn't have a network connection. Be aware: That option opens you up to a potential world of hurt. To begin with you need to realize that, while the user is offline, you won't be able to authenticate your user. You'll have to decide what functionality you'll provide to a "non-authenticated user" -- all you know about "non-authenticated" users is that they have successfully logged onto the device that your app is running on.



Quote for the day:

"Be willing to make decisions. That's the most important quality in a good leader." -- General George S. Patton, Jr

Daily Tech Digest - August 10, 2020

Computer vision: Why it’s hard to compare AI and human perception

In the seemingly endless quest to reconstruct human perception, the field that has become known as computer vision, deep learning has so far yielded the most favorable results. Convolutional neural networks (CNNs), an architecture often used in computer vision deep learning algorithms, are accomplishing tasks that were extremely difficult with traditional software. However, comparing neural networks to human perception remains a challenge. And this is partly because we still have a lot to learn about the human vision system and the human brain in general. The complex workings of deep learning systems also compound the problem. Deep neural networks work in very complicated ways that often confound their own creators. In recent years, a body of research has tried to evaluate the inner workings of neural networks and their robustness in handling real-world situations. ... The researchers note that the human visual system is naturally pre-trained on large amounts of abstract visual reasoning tasks. This makes it unfair to test the deep learning model on a low-data regime, and it is almost impossible to draw solid conclusions about differences in the internal information processing of humans and AI.


How To Close The Distance On Remote Work: The Most Important Leadership Skill

In terms of mindset, your perspective is important. One of my colleagues (an especially responsive leader herself) says her grandmother has a gift for making each grandchild feel valued and unique. Great leadership is like this as well. While no one should play favorites, it’s powerful for each team member to feel they matter and know you appreciate them and their contribution. When you give people responsibility and trust them to do good work, you won’t have to be as involved in the work they’re doing. Your time will be spent coaching, developing and making decisions where your perspective or position are most critical. You should set guardrails—for example spending more than a certain amount of money or which key topics require your input or decision-making—but within those boundaries, set people free. By not being too deeply in the details, you’ll have more time to be accessible where you’re needed most. Another mindset to help you be more responsive is to know your people well. When you have a good sense of what motivates each employee and what their unique needs are, you’re able to tune your messages. You’ll be more responsive when you’re able to meet employees where they are and provide the information or direction they need most.


2035's Biggest AI Threat Is Already Here

Unlike a robot siege that might damage property, the harm caused by these deep fakes was the erosion of trust in people and society itself. The threat of A.I. may seem to be forever stuck in the future — after all, how can A.I. harm us when my Alexa can't even correctly give a weather report? — but Shane Johnson, Director of the Dawes Centre for Future Crimes at UCL, which funded the study, explains that these threats will only continue to grow in sophistication and entanglement with our daily lives. "We live in an ever-changing world which creates new opportunities - good and bad," Johnson warns. "As such, it is imperative that we anticipate future crime threats so that policymakers and other stakeholders with the competency to act can do so before new 'crime harvests' occur." While the authors concede that the judgments made in this study are inherently speculative in nature and influenced by our current political and technical landscape, they argue that the future of these technologies cannot be removed from those environments either. How did they do it? To make these futuristic judgments, the researchers gathered a team of 14 academics in related fields, seven experts from the private sector, and 10 experts from the public sector.


Fintech 2020: 5 trends shaping the future of the industry

One thing consumers prefer most is multiple services on one platform. Many fintech brands have already rolled out multiple services across one app, and the growing range of robust solutions offered through powerful API integrations will add to this. In the coming days, consumers who need banking services are likely to turn to those financial players who can offer convenience and ease of transactions in a way that is entirely safe and secure. Banks alone cannot do much to address these consumer needs, but technology can help a lot in digitalizing consumer demand. Blockchain and Big Data are two technologies in full swing, and they are also complementary. According to experts, brands adopting burgeoning blockchain technology will benefit the most: financial services will be able to reduce fraudulent activities and phishing attacks and ensure secure payments. The other area Fintech needs to bring its attention to is Artificial Intelligence, Machine Learning and Data Analytics, as all of these can help financial services address key challenges like cost reduction and scrutinizing risky transactions.


The dark side of Israeli cybersecurity firms

The common denominator of these companies is their definition as cybersecurity firms. "The law doesn't allow companies or individuals to get involved with offensive cyber," according to Dr. Harel Menashri, head of the cyber department at the Holon Institute of Technology, who was a co-founder of the Shin Bet Cyber Warfare Unit. "The Israeli cyber industry has made itself a good name regarding advanced capabilities ... One of the greatest advantages of the Israeli culture is the ability to develop and move around things very quickly. Even if I didn't serve in the same unit with someone who I'm interested in, I'll probably know someone who did," Menashri added. "Israelis gain their technological knowledge during their military service through units like 8200 and the cyber units of Shin Bet and the Mossad. That knowledge is a weapon, and today, quite a few IDF veterans from intelligence units move abroad and share their knowledge with foreign parties." Menashri gave the example of a group of young Israelis who had graduated from the IDF's elite Unit 8200 and, a few months ago, decided to go and work for the UAE-based intelligence firm Dark Matter after being tempted by large sums of money.


How to Build an Accessibility-First Design Culture

A great place to begin is your component library. Identify which components are used most often and which ones underpin other functionality. For example, make sure buttons, inputs and links have accessible focus and hover states. It’s a high-leverage, efficient way of scaling accessibility fixes, because once you make one fix, you’ll see it propagate throughout the organization wherever that component is used. There are a few key factors to be aware of at this stage. First, create a clear plan for who can make changes and how you’re testing components to ensure accessibility features are not unintentionally removed. Second, your work doesn’t end after creating accessible components. In the UI, individual components are put together like puzzle pieces, and just because each piece is accessible doesn’t mean the entire UI will be. Since the UI involves multiple components talking to each other, you’ll need to ensure that the experience is usable and accessible as a whole. The goal is to ensure every existing and new component in a library is accessible by default. This way, when developers pull features into their work, they’ll know with certainty it’s designed to be accessible. Get it right once, and you get it right everywhere.


Powering the Era of Smart Cities

A priority for cities in the years to come will be reducing air pollution levels. This is already a major concern – nine in ten people breathe polluted air, resulting in seven million deaths every year, according to the World Health Organisation. As city populations and traffic volumes boom, the role of smart technology in tackling pollution will be crucial. While data on emissions and congestion has been available for some time, only recently have we been able to build a full picture of its reach and harm. Fusing data from various sources can reveal new insights that can be used to manage energy use and minimise pollution. For example, IoT sensor technology can intelligently detect when there is little or even no pedestrian or road traffic, dimming streetlights autonomously and saving energy. By crunching vehicle rates in real time, along with pressure, temperature and humidity, air quality levels can be accurately predicted and mapped. This provides the insight to proactively adapt traffic controls and mitigate harm. As always, the smart move is to analyse and adopt best practices from other cities and nations. Singapore, for example, is generally considered the global smart city leader, largely thanks to significant government investment in digital innovation and connected technologies.
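
To make the data-fusion idea concrete, here is a minimal sketch of how fused readings (vehicle rate, pressure, temperature, humidity) could feed a simple air-quality prediction model. The feature set, the synthetic data and the model choice are illustrative assumptions, not details from any particular smart-city deployment.

```python
# Minimal sketch: predicting an air-quality index from fused sensor readings.
# The features and the synthetic data below are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1_000

# Hypothetical fused readings: vehicles/minute, pressure (hPa), temperature (°C), humidity (%).
X = np.column_stack([
    rng.poisson(30, n),          # vehicle rate
    rng.normal(1013, 5, n),      # barometric pressure
    rng.normal(18, 6, n),        # temperature
    rng.uniform(30, 90, n),      # relative humidity
])

# Synthetic target: an air-quality index that worsens with traffic and humidity.
y = 20 + 1.5 * X[:, 0] + 0.2 * X[:, 3] + rng.normal(0, 5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.2f}")
```

In a real deployment the target would come from calibrated air-quality monitors rather than a synthetic formula, but the pipeline shape (fuse sensor streams, train, predict per street segment) is the same.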


The future of tech in healthcare: wearables?

IoT and wearable devices are ideally placed to transform the management of both preventable and chronic diseases and represent a big opportunity for digital to disrupt the industry. Data on human health can now be collated at a level and scale that was never before possible, while innovations in machine learning and adaptive algorithms provide credible predictors for the risk of diseases. Such data gives us actionable insight, empowering us to make small but significant changes to lifestyle habits so we may work towards living a longer, healthier life. The opportunity, however, does not come without challenges, and two of the biggest obstacles that must be negotiated are budgetary and clinical. On the financial side, the system lives or dies depending on whether doctors have the additional time and expertise to interpret and implement a treatment plan based on the assessment of vast reams of data. On the clinical side, non-medically graded, user-generated data makes it challenging for a doctor to include this within the overall treatment decision-making process. The strength of AI and machine learning, of course, is that they can cope with large amounts of data and find statistical correlations where they exist.


Microsoft unveils Open Service Mesh, vows donation to CNCF

Open Service Mesh builds on SMI, which is expressly not a service mesh implementation, but rather a set of standard API specifications designed within CNCF. If followed, the specs allow service mesh interoperability across multiple types of networks, including other service meshes, and public, private and hybrid clouds. The service mesh layer will be a key component of broadly accessible, real-world multi-cloud container portability as mainstream enterprise cloud-native applications advance, Pullen said. “Service mesh should help that, theoretically, especially if there’s standardization of it, but it’s going to require an interesting rework to make any Docker container compatible with any container cluster,” he said. “It’s more than putting something in Docker, it’s about that ability to route services in a somewhat decoupled way.” Simplicity and ease of use were also points of emphasis in Microsoft’s OSM rollout, which analysts said seemed to target another common complaint from early adopters of Istio: operational complexity. OSM, by contrast, will build in some services that have been complex for service mesh early adopters to set up themselves, such as mutual TLS authentication.
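
Since mutual TLS is called out above as one of the fiddly pieces a mesh can take off a team's plate, here is a minimal, generic sketch of what that handshake involves: a server that rejects any client lacking a certificate signed by a trusted CA. This is only an illustration of the concept, not how OSM itself configures mTLS, and the certificate file paths are placeholders.

```python
# Generic mutual-TLS sketch: the server requires a client certificate signed by
# a trusted CA. File names are placeholders; real certs must exist on disk.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.crt", keyfile="server.key")
context.load_verify_locations(cafile="mesh-ca.crt")   # CA that signs workload certs
context.verify_mode = ssl.CERT_REQUIRED               # reject clients with no valid cert

with socket.create_server(("0.0.0.0", 8443)) as sock:
    with context.wrap_socket(sock, server_side=True) as tls_sock:
        conn, addr = tls_sock.accept()                 # handshake enforces client auth
        print("mTLS connection from", addr, conn.getpeercert()["subject"])
```

A sidecar proxy in a mesh performs the equivalent exchange on behalf of each workload, which is precisely the setup work a project like OSM aims to automate.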


Understanding What Good Agile Looks Like

Agile management began as a work of passion. It was born of a fierce desire felt by disgruntled software developers to set things right. Their Agile Manifesto (2001) not only succeeded in its modest goal of “uncovering better ways of developing software”; it also had the unintended consequence of generating a candidate paradigm for management in 2020 generally. Thus, Agile management began by exploring more nimble processes for one team, then several teams, then many teams and then the whole organization. It set in train the emergence of firms like Amazon and Google that not only showered benefits on their customers and users but also, for better or worse, developed the capacity to dominate the entire planet. As society now struggles to decide what to do about these new behemoths, it is useful to keep their possible flaws conceptually separate from the principles, processes and practices that enabled them to grow so fast. We need to keep in mind what good Agile looks like—essentially a better way for human beings to create more value for other human beings. In any established organization, a small set of fairly stable principles (also known as a mindset or management model) tends to guide decision-making throughout the organization.



Quote for the day:

“Strength and growth come only through continuous effort and struggle.” -- Napoleon Hill

Daily Tech Digest - August 09, 2020

Grassroots Data Security: Leveraging User Knowledge to Set Policy

Today, the IT team owns the entire problem. They write rules to discover and characterize content (What is this file? Do we care about it?). They write more rules to evaluate that content (Is it stored in the right place? Is it marked correctly?). Then they write still more rules to enforce a policy (block, quarantine, encrypt, log). Unsurprisingly, complexity, maintenance overhead, false positives and security lapses are inevitable. It turns out data security policies are already defined. They’re hiding in plain sight. That’s because content creators are also the content experts and they’re demonstrating policy as they go. A sales team, for example, manages hundreds of quotes, contracts and other sensitive documents. The way they mark, store, share and use them defines an implicit data security policy. Every group of similar documents has an implicit policy defined by the expert content creators themselves. The problem, of course, is how to extract that grassroots wisdom. Deep learning gives us two tools to do it: representation learning and anomaly detection. Representation learning is the ability to process large amounts of information about a group of “things” (files in our case) and categorize those things. For data security, advances in natural language processing now give us insights into a document’s meaning that are far richer and more accurate than simple keyword matches.
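
As a toy illustration of those two tools, the sketch below stands in a simple TF-IDF representation for a learned embedding, and a nearest-centroid similarity check for a full anomaly detector. The documents and the threshold are invented for the example.

```python
# Toy sketch: learn a representation of one group's documents, then flag new
# files that sit far from that group as potential policy outliers.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sales_docs = [
    "Quote for ACME Corp: 500 licenses, net-30 payment terms, confidential",
    "Master services agreement draft, confidential, pending legal review",
    "Renewal contract with discount schedule attached, confidential",
]
new_files = [
    "Quote for Globex: 200 licenses, net-45 payment terms",  # resembles the group's documents
    "Team lunch menu and parking instructions",              # does not fit the implicit policy
]

vectorizer = TfidfVectorizer().fit(sales_docs)
# The group's "policy fingerprint": the average of its document vectors.
centroid = np.asarray(vectorizer.transform(sales_docs).mean(axis=0))

for text in new_files:
    similarity = cosine_similarity(vectorizer.transform([text]).toarray(), centroid)[0, 0]
    flag = "typical" if similarity > 0.1 else "review: possible policy outlier"
    print(f"{similarity:.2f}  {flag}  {text}")
```

A production system would use far richer language models and a proper anomaly detector, but the shape of the idea is the same: the group's own documents define "normal", and outliers surface for review instead of being caught by hand-written rules.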


IoT governance: how to deal with the compliance and security challenges

According to Ted Wagner, CISO at SAP NS2, the topics that should be included in any IoT governance program are “software and hardware vulnerabilities, and compliance with security requirements — whether they be regulatory or policy based.” He refers to a typical use case of when a software flaw is discovered within an IoT device. In this instance, it is important to determine the severity of the flaw. Could it lead to a security incident? How quickly does it need to be addressed? If there is no way to patch the software, is there another way to protect the device or mitigate the risk? “A good way to deal with IoT governance is to have a board as a governance structure. Proposals are presented to the board, which is normally made up of 6-12 individuals who discuss the merits of any new proposal or change. They may monitor ongoing risks like software vulnerabilities by receiving periodic vulnerability reports that include trends or metrics on vulnerabilities. Some boards have a lot of authority, while others may act as an advisory function to an executive or a decision maker,” Wagner advises.


Smart locks opened with nothing more than a MAC address

Young reached out to U-Tec on November 10, 2019, with his findings. The company told Young not to worry in the beginning, claiming that "unauthorized users will not be able to open the door." The cybersecurity researcher then provided them with a screenshot of the Shodan scrape, revealing active customer email addresses leaked in the form of MQTT topic names. Within a day, the U-Tec team made a few changes, including closing an open port, adding rules to prevent non-authenticated users from subscribing to services, and "turning off non-authenticated user access." While an improvement, this did not resolve everything. "The key problem here is that they focused on user authentication but failed to implement user-level access controls," Young commented. "I demonstrated that any free/anonymous account could connect and interact with devices from any other user. All that was necessary is to sniff the MQTT traffic generated by the app to recover a device-specific username and an MD5 digest which acts as a password." After being pushed further, U-Tec spent the next few days implementing user isolation protocols, resolving every issue reported by Tripwire within a week.
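
The missing control Young describes (authentication without per-user authorization) can be illustrated with a small hypothetical sketch of a broker-side check that only lets an authenticated account subscribe to topics under its own prefix. The topic scheme and the function are assumptions for illustration, not U-Tec's actual fix.

```python
# Hypothetical broker-side authorization check: even an authenticated client
# may only subscribe to topics under its own user prefix.
def authorize_subscription(username: str, topic: str) -> bool:
    """Allow a client to subscribe only to topics belonging to its own account."""
    allowed_prefix = f"users/{username}/"
    return topic.startswith(allowed_prefix)

# An authenticated but unauthorized cross-user subscription is refused.
assert authorize_subscription("alice", "users/alice/lock/commands") is True
assert authorize_subscription("alice", "users/bob/lock/commands") is False
```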


RPA competitors battle for a bigger prize: automation everywhere

Competitive dynamics are heating up. The two emergent leaders, Automation Anywhere Inc. and UiPath Inc., are separating from the pack. Large incumbent software vendors such as Microsoft Corp., IBM Corp. and SAP SE are entering the market and positioning RPA as a feature. Meanwhile, the legacy business process players continue to focus on taking their installed bases on a broader automation journey. However, in our view all three of these constituents are on a collision course where a deeper automation objective is the “north star.” First, we have expanded our thinking on the RPA total available market and are extending it toward a broader automation agenda more consistent with buyer goals. In other words, the TAM is much larger than we initially thought, and we’ll explain why. Second, we no longer see this as a winner-take-all or winner-take-most market. In this segment we’ll look deeper into the leaders and share some new data. In particular, although it appeared in our previous analysis that UiPath was running the table on the market, we see a more textured competitive dynamic setting up, and the data suggests that other players, including Automation Anywhere and some of the larger incumbents, will challenge UiPath for leadership in this market.


Unlocking Industry 4.0: Understanding IoT In The Age Of 5G

The challenge is not just about bandwidth. Different IoT systems will have different network requirements. Some devices will demand absolute reliability where low latency will be critical, while other use cases will see networks having to cope with a much higher density of connected devices than we’ve previously seen. For example, within a production plant, one day simple sensors might collect and store data and communicate to a gateway device that contains application logic. In other scenarios, IoT sensor data might need to be collected in real-time from sensors, RFID tags, tracking devices, even mobile phones across a wider area via 5G protocols. Bottom line: Future 5G networks could help enable a number of IoT and IIoT use cases and benefits in the manufacturing industry. Looking ahead, don’t be surprised if you see these five use cases transform with strong, reliable connectivity from multi-spectrum 5G networks currently being built and the introduction of compatible devices. With IoT/IIoT, manufacturers could connect production equipment and other machines, tools, and assets in factories and warehouses, providing managers and engineers with more visibility into production operations and any issues that might arise.
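
As a rough sketch of the first scenario, where the application logic lives in a gateway rather than in the cloud, the example below aggregates readings locally and only escalates sensors that breach a threshold. The sensor names, units and threshold are invented for illustration.

```python
# Minimal gateway sketch: sensors push readings to a local gateway that holds
# the application logic (here, a simple threshold alarm). Values are made up.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Gateway:
    vibration_limit: float = 4.0                 # mm/s, illustrative alarm threshold
    readings: dict = field(default_factory=dict)

    def ingest(self, sensor_id: str, value: float) -> None:
        self.readings.setdefault(sensor_id, []).append(value)

    def evaluate(self) -> list:
        """Return only the sensors whose average reading exceeds the limit."""
        return [s for s, vals in self.readings.items() if mean(vals) > self.vibration_limit]

gw = Gateway()
for value in (3.1, 4.4, 5.2):
    gw.ingest("press-07", value)
gw.ingest("press-08", 2.0)
print(gw.evaluate())   # ['press-07'] -- only the out-of-range sensor is escalated
```

In the second scenario described above, the same readings would instead stream in real time over 5G to a central platform, trading local simplicity for wider visibility.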


The case for microservices is upon us

For many businesses, monolithic architecture has been and will continue to be sufficient. However, with the rise of mobile browsing and the growing ubiquity of omnichannel service delivery, many businesses are finding their code libraries become more convoluted and difficult to maintain with each passing year. As businesses scale and expand their business capabilities, they often run into the issue that the code behind their various components is too tightly bound in a monolithic structure. This makes it difficult to deploy updates and fixes because change cycles are tied together, which means they need to update the whole system at once instead of simply updating the single function that needs improvement. Microservices architecture is one of the ways companies are overhauling their tech stacks to keep up with modern DevOps best practices and future-proof their operations, making them more flexible and agile. Given the rapid pace of change where technologies and consumer expectations are concerned, businesses that do not build capacity for agility and scalability into their business model are placing themselves at a disadvantage – particularly at a time when businesses are being forced to pivot frequently in response to widespread market instability.


Game of Microservices

A microservice works best when it has its own private database (database per service). This ensures loose coupling with other services and maintains data integrity, i.e. each microservice controls and updates its own data. ... A SAGA is a sequence of local transactions. In a SAGA, a set of services works in tandem to execute a piece of functionality; each local transaction updates the data in one service and sends an event or a message that triggers the next transaction in other services. The microservices architecture (usually) mandates the database-per-service paradigm. The monolithic approach, though it has its own operational issues, deals with transactions very well: it offers an inherent mechanism for ACID transactions and for rollback in case of failure. In contrast, in the microservices approach, because the data and the data sources are distributed by service, some transactions may span multiple services. Achieving transactional guarantees in such cases is of high importance, or else we lose data consistency and the application can end up in an unexpected state. The SAGA approach is a mechanism for ensuring data consistency across services.
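
A minimal sketch of the compensation idea behind a SAGA is shown below. The article describes the event-driven (choreography) form, where each local transaction publishes an event that triggers the next; for brevity this sketch uses a simple orchestrator loop, and the service names and steps are invented.

```python
# Toy orchestration-style saga: each step is a local transaction in one
# service, and each has a compensating action that undoes it if a later step
# fails. Names and steps are illustrative only.
class PaymentDeclined(Exception):
    pass

def reserve_inventory(order):   print(f"[inventory] reserved for {order}")
def release_inventory(order):   print(f"[inventory] released for {order}")
def charge_payment(order):      raise PaymentDeclined(order)   # simulate a failure mid-saga
def create_shipment(order):     print(f"[shipping] shipment created for {order}")

SAGA_STEPS = [
    (reserve_inventory, release_inventory),
    (charge_payment, None),          # nothing to undo if the charge itself fails
    (create_shipment, None),
]

def run_saga(order):
    completed = []
    try:
        for action, compensation in SAGA_STEPS:
            action(order)
            completed.append(compensation)
    except Exception as exc:
        print(f"[saga] step failed: {exc!r}; compensating previous steps")
        for compensation in reversed(completed):
            if compensation:
                compensation(order)

run_saga("order-42")
```

Running this reserves inventory, fails at payment, and then runs the compensating release, which is exactly the "undo by forward action" behaviour a saga substitutes for a distributed ACID rollback.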


Metadata Repository Basics: From Database to Data Architecture

While knowledge graphs have shown potential for the metadata repository to find relationship patterns among large amounts of information, some businesses want more from a metadata repository. Streaming data ingested into databases from social media and IoT sensors also needs to be described. According to a New Stack survey of 800 professional developers, real-time data use has seen a significant increase. What does this mean for the metadata repository? Enterprises want metadata to show the who, what, why, when, and how of their data. The centralized metadata repository database answers these questions but remains too slow and cumbersome to handle large amounts of light-speed metadata. Knowledge graphs have the advantage of dealing with large amounts of data quickly. However, knowledge graphs display only specific types of patterns in their metadata repository. Companies need another metadata repository tool. Enter the data catalog, a metadata repository that tells consumers what data lives in which data systems and the context of that data.
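
To make the "who, what, why, when, and how" concrete, here is a hypothetical sketch of what a single data catalog entry might capture. The field names and the example dataset are invented for illustration, not drawn from any particular catalog product.

```python
# Hypothetical data catalog entry covering the who/what/why/when/how of one dataset.
from dataclasses import dataclass
from datetime import date

@dataclass
class CatalogEntry:
    name: str              # what the dataset is
    owner: str             # who is responsible for it
    purpose: str           # why it exists
    created: date          # when it appeared
    source_system: str     # how and where it is produced
    contains_pii: bool     # context a consumer needs before using it

entry = CatalogEntry(
    name="customer_orders_stream",
    owner="commerce-data-team",
    purpose="Near-real-time order events for fulfilment dashboards",
    created=date(2020, 3, 1),
    source_system="Streaming ingest from the checkout service",
    contains_pii=True,
)
print(entry)
```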


Why edge computing is forcing us to rethink software architectures

The perspective on cloud hardware has since shifted. The current generation of cloud focuses on expensive, high-performance hardware rather than cheap commoditised systems. For one, cloud hardware and data centre architectures are morphing into something resembling an HPC system or supercomputer. Networking has followed the same route, with technologies like InfiniBand EDR and photonics paving the way for ever greater bandwidth and tighter latencies between servers, while backbones and virtual networks have improved the bandwidth between geographically distant cloud data centres. The other shift currently underway is in the layout of these platforms themselves. The cloud is morphing and merging into edge computing environments in which data centres are deployed with significantly greater decentralisation and distribution. Traditionally, an entire continent might be served by a handful of cloud data centres. Edge computing moves these computing resources much closer to the end user — virtually to every city or major town. The edge data centres of every major cloud provider are now integrated into their backbone, providing a sophisticated, geographically dispersed grid.


The Importance of Reliability Engineering

SRE isn’t just a set of practices and policies—it’s a mentality on how to develop software in a culture free of blame. By embracing this new mindset, your team’s morale and camaraderie will improve, allowing everyone to work at their full potential in a psychologically safe environment. SRE teaches us that failure is inevitable. No matter how many precautions you take, incidents happen. While giving you the tools to respond effectively to these incidents, SRE also challenges us to celebrate these failures. When something new goes wrong, it means there’s a chance to learn about your systems. This attitude creates an environment of continuous learning.  When analyzing these inevitable incidents, it’s important to maintain an attitude of blamelessness. Instead of wasting time pointing fingers and finding fault, work together to find the systematic issues behind the incident. By avoiding a culture of blame and shame, engineers are less afraid to proactively raise issues. Team members will trust each other more, assuming good faith in their teammates’ choices. This spirit of blameless collaboration will transform the most challenging incidents into opportunities for growing stronger together.



Quote for the day:

"One must be convinced to convince, to have enthusiasm to stimulate the others." -- Stefan Zweig