Daily Tech Digest - August 10, 2020

Computer vision: Why it’s hard to compare AI and human perception

In the seemingly endless quest to reconstruct human perception, the field that has become known as computer vision, deep learning has so far yielded the most favorable results. Convolutional neural networks (CNNs), an architecture often used in computer vision deep learning algorithms, are accomplishing tasks that were extremely difficult with traditional software. However, comparing neural networks to human perception remains a challenge, partly because we still have a lot to learn about the human visual system and the human brain in general. The complex workings of deep learning systems compound the problem: deep neural networks work in very complicated ways that often confound their own creators. In recent years, a body of research has tried to evaluate the inner workings of neural networks and their robustness in handling real-world situations. ... The researchers note that the human visual system is naturally pre-trained on large amounts of abstract visual reasoning tasks. This makes it unfair to test a deep learning model in a low-data regime, and it is almost impossible to draw solid conclusions about differences in the internal information processing of humans and AI.


How To Close The Distance On Remote Work: The Most Important Leadership Skill

In terms of mindset, your perspective is important. One of my colleagues (an especially responsive leader herself) says her grandmother has a gift for making each grandchild feel valued and unique. Great leadership is like this as well. While no one should play favorites, it’s powerful for each team member to feel they matter and know you appreciate them and their contribution. When you give people responsibility and trust them to do good work, you won’t have to be as involved in the work they’re doing. Your time will be spent coaching, developing and making decisions where your perspective or position is most critical. You should set guardrails—for example, spending above a certain amount of money, or key topics that require your input or decision-making—but within those boundaries, set people free. By not being too deep in the details, you’ll have more time to be accessible where you’re needed most. Another mindset to help you be more responsive is to know your people well. When you have a good sense of what motivates each employee and what their unique needs are, you’re able to tune your messages. You’ll be more responsive when you’re able to meet employees where they are and provide the information or direction they need most.


2035's Biggest AI Threat Is Already Here

Unlike a robot siege that might damage property, the harm caused by these deep fakes was the erosion of trust in people and society itself. The threat of A.I. may seem to be forever stuck in the future — after all, how can A.I. harm us when my Alexa can't even correctly give a weather report? — but Shane Johnson, Director of the Dawes Centre for Future Crimes at UCL, which funded the study, explains that these threats will only continue to grow in sophistication and entanglement with our daily lives. "We live in an ever-changing world which creates new opportunities - good and bad," Johnson warns. "As such, it is imperative that we anticipate future crime threats so that policymakers and other stakeholders with the competency to act can do so before new 'crime harvests' occur." While the authors concede that the judgments made in this study are inherently speculative and influenced by our current political and technical landscape, they argue that the future of these technologies cannot be divorced from those environments either. HOW DID THEY DO IT — In order to make these futuristic judgments, the researchers gathered a team of 14 academics in related fields, seven experts from the private sector, and 10 experts from the public sector.


Fintech 2020: 5 trends shaping the future of the industry

One thing consumers prefer most is multiple services across one platform. Many fintech brands have already rolled out multiple services across one app, and the growing availability of robust solutions through powerful API integrations will add to this. In the coming days, consumers who need banking services are likely to turn to those financial players who can offer convenience and ease of transactions that are entirely safe and secure. Banks alone cannot do much to address these consumer needs, but technology can help a lot in digitalizing consumer demand. Blockchain and Big Data are two technologies in full swing, and they are also complementary. According to experts, brands adopting burgeoning blockchain technology will benefit the most: financial services will be able to reduce fraudulent activities and phishing attacks and ensure secure payments. The other areas fintech needs to bring its attention to are Artificial Intelligence, Machine Learning and Data Analytics, as all of these can help financial services address key challenges like cost reduction and scrutinizing risky transactions.


The dark side of Israeli cybersecurity firms

The common denominator of these companies is their definition as cybersecurity firms. "The law doesn't allow companies or individuals to get involved with offensive cyber," according to Dr. Harel Menashri, head of the cyber department at the Holon Institute of Technology, who was a co-founder of the Shin Bet Cyber Warfare Unit. "The Israeli cyber industry has made itself a good name regarding advanced capabilities ... One of the greatest advantages of the Israeli culture is the ability to develop and move around things very quickly. Even if I didn't serve in the same unit with someone who I'm interested in, I'll probably know someone who did," Menashri added. "Israelis gain their technological knowledge during their military service through units like 8200 and the cyber units of Shin Bet and the Mossad. That knowledge is a weapon, and today, quite a few IDF veterans from intelligence units move abroad and share their knowledge with foreign parties." Menashri gave the example of a group of young Israelis who had graduated from the IDF's elite Unit 8200 and a few months ago decided to go and work for the UAE-based intelligence firm Dark Matter after being tempted by large sums of money.


How to Build an Accessibility-First Design Culture

A great place to begin is your component library. Identify which components are used most often and which ones underpin other functions. For example, make sure buttons, inputs and links have accessible focus and hover states. It’s a high-leverage, efficient way of scaling accessibility fixes because once you make one fix, you’ll see it propagate throughout the organization wherever that component is used. There are a few key factors to be aware of at this stage. First, create a clear plan for who can make changes and how you’re testing components to ensure accessibility features are not unintentionally removed. Second, your work doesn’t end after creating accessible components. In the UI, individual components are put together like puzzle pieces, and just because each piece is accessible doesn’t mean the entire UI will be. Since the UI involves multiple components talking to each other, you’ll need to ensure that the experience is usable and accessible as a whole. The goal is to ensure every existing and new component in a library is accessible by default. This way, when developers pull features into their work, they’ll know with certainty it’s designed to be accessible. Get it right once, and you get it right everywhere.


Powering the Era of Smart Cities

A priority for cities in the years to come will be reducing air pollution levels. This is already a major concern – nine in ten people breathe polluted air, resulting in seven million deaths every year, according to the World Health Organisation. As city populations and traffic volumes boom, the role of smart technology in tackling pollution will be crucial. While data on emissions and congestion has been available for some time, only recently have we been able to build a full picture of its reach and harm. Fusing data from various sources can reveal new insights to be used to manage energy use and minimise pollution. For example, IoT sensor technology can intelligently detect when there is little or even no pedestrian or road traffic, dimming streetlights autonomously and saving energy. By crunching vehicle rates in real time, as well as pressure, temperature and humidity, air quality levels can be accurately predicted and mapped. This provides the insight to proactively adapt traffic controls and mitigate harm. As always, the smart move is to analyse and adopt best practices from other cities and nations. Singapore, for example, is generally considered to be the global smart city leader, largely due to significant government investments in digital innovation and connected technologies.


The future of tech in healthcare: wearables?

IoT and wearable devices are ideally placed to transform the management of both preventable and chronic diseases and represent a big opportunity for digital to disrupt the industry. Data on human health can now be collated to a level and scale that was never before possible, while innovations in machine learning and adaptive algorithms provide credible predictors for the risk of diseases. Such data gives us actionable insight, empowering us to make small but significant changes to lifestyle habits so we may work towards living a longer, healthier life. The opportunity, however, does not come without challenges, and two of the biggest obstacles that must be negotiated are budgetary and clinical. On the financial side, the system either lives or dies depending on whether doctors have the additional time and expertise to interpret and implement a treatment plan based on the assessment of vast reams of data. On the clinical side, non-medically graded user-generated data makes it challenging for a doctor to include this within the overall treatment decision-making process. The strength of AI and machine learning, of course, is that they can cope with large amounts of data and find statistical correlations where they exist.


Microsoft unveils Open Service Mesh, vows donation to CNCF

Open Service Mesh builds on SMI, which is expressly not a service mesh implementation, but rather a set of standard API specifications designed within CNCF. If followed, the specs allow service mesh interoperability across multiple types of networks, including other service meshes, and public, private and hybrid clouds. The service mesh layer will be a key component of broadly accessible, real-world multi-cloud container portability as mainstream enterprise cloud-native applications advance, Pullen said. “Service mesh should help that, theoretically, especially if there’s standardization of it, but it’s going to require an interesting rework to make any Docker container compatible with any container cluster,” he said. “It’s more than putting something in Docker, it’s about that ability to route services in a somewhat decoupled way.” Simplicity and ease of use were also points of emphasis in Microsoft’s OSM rollout, which analysts said seemed to target another common complaint among early adopters of Istio: operational complexity. OSM, by contrast, will build in some services that have been complex for service mesh early adopters to set up themselves, such as mutual TLS authentication.


Understanding What Good Agile Looks Like

Agile management began as a work of passion. It was born of a fierce desire felt by disgruntled software developers to set things right. Their Agile Manifesto (2001) not only succeeded in its modest goal of “uncovering better ways of developing software.” It also had the unintended consequence of generating a candidate paradigm for management generally in 2020. Thus, Agile management began with exploring more nimble processes for one team, then several teams, then many teams and then the whole organization. It set in train the emergence of firms like Amazon and Google that not only showered benefits on their customers and users but also, for better or worse, developed the capacity to dominate the entire planet. As society now struggles to decide what to do about these new behemoths, it is useful to keep their possible flaws conceptually separate from the principles, processes and practices that enabled them to grow so fast. We need to keep in mind what good Agile looks like—essentially a better way for human beings to create more value for other human beings. In any established organization, a small set of fairly stable principles (also known as a mindset or management model) tends to guide decision-making throughout the organization.



Quote for the day:

“Strength and growth come only through continuous effort and struggle.” -- Napoleon Hill

Daily Tech Digest - August 09, 2020

Grassroots Data Security: Leveraging User Knowledge to Set Policy

Today, the IT team owns the entire problem. They write rules to discover and characterize content (What is this file? Do we care about it?). They write more rules to evaluate that content (Is it stored in the right place? Is it marked correctly?). Then they write still more rules to enforce a policy (block, quarantine, encrypt, log). Unsurprisingly, complexity, maintenance overhead, false positives and security lapses are inevitable. It turns out data security policies are already defined. They’re hiding in plain sight. That’s because content creators are also the content experts and they’re demonstrating policy as they go. A sales team, for example, manages hundreds of quotes, contracts and other sensitive documents. The way they mark, store, share and use them defines an implicit data security policy. Every group of similar documents has an implicit policy defined by the expert content creators themselves. The problem, of course, is how to extract that grassroots wisdom. Deep learning gives us two tools to do it: representation learning and anomaly detection. Representation learning is the ability to process large amounts of information about a group of “things” (files in our case) and categorize those things. For data security, advances in natural language processing now give us insights into a document’s meaning that are far richer and more accurate than simple keyword matches.
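The two deep learning tools named above can be illustrated with a much simpler stand-in: build a crude bag-of-words "representation" for each document in a group, then flag the one whose handling deviates from its peers. Everything below (the sample documents, the 0.5 threshold, the cosine similarity measure) is an illustrative assumption, not part of any product described in the excerpt:

```python
# Toy sketch: flag a document that deviates from its peer group's implicit policy.
from collections import Counter
from math import sqrt

def vectorize(text):
    """Toy bag-of-words 'representation' of a document."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def centroid(vectors):
    total = Counter()
    for v in vectors:
        total.update(v)
    return total

def anomalies(docs, threshold=0.5):
    """Return documents whose similarity to the group centroid falls below threshold."""
    vecs = [vectorize(d) for d in docs]
    c = centroid(vecs)
    return [d for d, v in zip(docs, vecs) if cosine(v, c) < threshold]

quotes = [
    "quote for customer acme total 1200 confidential",
    "quote for customer globex total 900 confidential",
    "vacation photos from the beach trip",  # deviates from the sales team's pattern
]
print(anomalies(quotes))  # → ['vacation photos from the beach trip']
```

Real representation learning replaces the bag-of-words vector with a learned embedding, but the shape of the idea is the same: similar documents cluster, and an outlier in how a group's documents look (or are stored and shared) is a policy signal.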


IoT governance: how to deal with the compliance and security challenges

According to Ted Wagner, CISO at SAP NS2, the topics that should be included in any IoT governance program are “software and hardware vulnerabilities, and compliance with security requirements — whether they be regulatory or policy based.” He refers to a typical use case of when a software flaw is discovered within an IoT device. In this instance, it is important to determine the severity of the flaw. Could it lead to a security incident? How quickly does it need to be addressed? If there is no way to patch the software, is there another way to protect the device or mitigate the risk? “A good way to deal with IoT governance is to have a board as a governance structure. Proposals are presented to the board, which is normally made up of 6-12 individuals who discuss the merits of any new proposal or change. They may monitor ongoing risks like software vulnerabilities by receiving periodic vulnerability reports that include trends or metrics on vulnerabilities. Some boards have a lot of authority, while others may act as an advisory function to an executive or a decision maker,” Wagner advises.


Smart locks opened with nothing more than a MAC address

Young reached out to U-Tec on November 10, 2019, with his findings. The company told Young not to worry in the beginning, claiming that "unauthorized users will not be able to open the door." The cybersecurity researcher then provided them with a screenshot of the Shodan scrape, revealing active customer email addresses leaked in the form of MQTT topic names. Within a day, the U-Tec team made a few changes, including the closure of an open port, adding rules to prevent non-authenticated users from subscribing to services, and "turning off non-authenticated user access." While an improvement, this did not resolve everything.  "The key problem here is that they focused on user authentication but failed to implement user-level access controls," Young commented. "I demonstrated that any free/anonymous account could connect and interact with devices from any other user. All that was necessary is to sniff the MQTT traffic generated by the app to recover a device-specific username and an MD5 digest which acts as a password." After being pushed further, U-Tec spent the next few days implementing user isolation protocols, resolving every issue reported by Tripwire within a week.
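The missing user-level access control Young describes can be sketched in a few lines: authentication establishes who a client is, but each account must also be bound to the specific device topics it owns. The topic layout, account names and `authorized` helper below are hypothetical, not U-Tec's actual scheme:

```python
# Hypothetical sketch of per-user authorization for MQTT-style device topics.

def authorized(user, topic, ownership):
    """Allow a subscription only if the topic's device belongs to the user."""
    device = topic.split("/")[-1]  # assumed layout: "locks/status/<device-id>"
    return device in ownership.get(user, set())

# Server-side record of which devices each account owns (illustrative data).
ownership = {"alice@example.com": {"lock-42"}}

# The owner may subscribe to her own lock's status topic:
assert authorized("alice@example.com", "locks/status/lock-42", ownership)
# An authenticated stranger must still be denied another user's device:
assert not authorized("mallory@example.com", "locks/status/lock-42", ownership)
print("ACL checks pass")
```

The point of the sketch is the distinction the vendor initially missed: closing anonymous access fixes authentication, but without a check like this any valid account can still interact with every other user's devices.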


RPA competitors battle for a bigger prize: automation everywhere

Competitive dynamics are heating up. The two emergent leaders, Automation Anywhere Inc. and UiPath Inc., are separating from the pack. Large incumbent software vendors such as Microsoft Corp., IBM Corp. and SAP SE are entering the market and positioning RPA as a feature. Meanwhile, the legacy business process players continue to focus on taking their installed bases on a broader automation journey. However, all three of these constituents are on a collision course in our view where a deeper automation objective is the “north star.” First, we have expanded our thinking on the RPA total available market and we are extending this toward a broader automation agenda more consistent with buyer goals. In other words, the TAM is much larger than we initially thought and we’ll explain why. Second, we no longer see this as a winner-take-all or winner-take-most market. In this segment we’ll look deeper into the leaders and share some new data. In particular, although it appeared in our previous analysis that UiPath was running the table on the market, we see a more textured competitive dynamic setting up and the data suggests that other players, including Automation Anywhere and some of the larger incumbents, will challenge UiPath for leadership in this market. 


Unlocking Industry 4.0: Understanding IoT In The Age Of 5G

The challenge is not just about bandwidth. Different IoT systems will have different network requirements. Some devices will demand absolute reliability where low latency will be critical, while other use cases will see networks having to cope with a much higher density of connected devices than we’ve previously seen. For example, within a production plant, one day simple sensors might collect and store data and communicate to a gateway device that contains application logic. In other scenarios, IoT sensor data might need to be collected in real-time from sensors, RFID tags, tracking devices, even mobile phones across a wider area via 5G protocols. Bottom line: Future 5G networks could help enable a number of IoT and IIoT use cases and benefits in the manufacturing industry. Looking ahead, don’t be surprised if you see these five use cases transform with strong, reliable connectivity from multi-spectrum 5G networks currently being built and the introduction of compatible devices. With IoT/IIoT, manufacturers could connect production equipment and other machines, tools, and assets in factories and warehouses, providing managers and engineers with more visibility into production operations and any issues that might arise.


The case for microservices is upon us

For many businesses, monolithic architecture has been and will continue to be sufficient. However, with the rise of mobile browsing and the growing ubiquity of omnichannel service delivery, many businesses are finding their code libraries becoming more convoluted and difficult to maintain with each passing year.  As businesses scale and expand their business capabilities, they often run into the issue that the code behind their various components is too tightly bound in a monolithic structure. This makes it difficult to deploy updates and fixes because change cycles are tied together, which means they need to update the whole system at once instead of simply updating the single function that needs improvement.  Microservices architecture is one of the ways companies are overhauling their tech stacks to keep up with modern DevOps best practices and future-proof their operations, making them more flexible and agile.  Given the rapid pace of change where technologies and consumer expectations are concerned, businesses that do not build capacity for agility and scalability into their business model are placing themselves at a disadvantage – particularly at a time when businesses are being forced to pivot frequently in response to widespread market instability.


Game of Microservices

A microservice works best when it has its own private database (database per service). This ensures loose coupling with other services and maintains data integrity, i.e. each microservice controls and updates its own data. ... A SAGA is a sequence of local transactions. In a SAGA, a set of services works in tandem to execute a piece of functionality: each local transaction updates the data in one service and sends an event or a message that triggers the next transaction in other services. The microservices architecture (usually) mandates the database-per-service paradigm. The monolithic approach, though it has its own operational issues, deals with transactions very well: it offers an inherent mechanism for ACID transactions and for rollback in cases of failure. In contrast, in the microservices approach, because we have distributed the data and the datasources by service, some transactions span multiple services. Achieving transactional guarantees in such cases is of high importance, or else we lose data consistency and the application can end up in an unexpected state. The SAGA approach is a mechanism to ensure data consistency across services.
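A minimal orchestration-style sketch of the SAGA idea: each local transaction is paired with a compensating action that undoes it if a later step fails, so consistency is restored without a distributed ACID transaction. The step names, the in-memory log and the `run_saga` helper are illustrative, not any particular framework's API:

```python
# Minimal saga sketch: run local transactions in order; on failure, run the
# completed steps' compensating transactions in reverse order, then abort.

class SagaError(Exception):
    pass

def run_saga(steps):
    """steps: list of (action, compensation) pairs of callables."""
    done = []
    for action, compensate in steps:
        try:
            action()
        except Exception as exc:
            for undo in reversed(done):
                undo()  # compensating transactions restore consistency
            raise SagaError(f"saga aborted: {exc}") from exc
        done.append(compensate)

log = []

def charge_payment_fails():
    raise RuntimeError("card declined")  # simulated failure in one service

steps = [
    (lambda: log.append("order created"), lambda: log.append("order cancelled")),
    (lambda: log.append("stock reserved"), lambda: log.append("stock released")),
    (charge_payment_fails, lambda: log.append("payment refunded")),
]
try:
    run_saga(steps)
except SagaError as e:
    log.append(str(e))

print(log)
```

In a real deployment the steps would be separate services coordinated by events or an orchestrator rather than in-process callables, but the compensation-in-reverse logic is the core of the pattern.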


Metadata Repository Basics: From Database to Data Architecture

While knowledge graphs have shown potential for the metadata repository to find relationship patterns among large amounts of information, some businesses want more from a metadata repository. Streaming data ingested into databases from social media and IoT sensors also needs to be described. According to a New Stack survey of 800 professional developers, real-time data use has seen a significant increase. What does this mean for the metadata repository? Enterprises want metadata to show the who, what, why, when, and how of their data. The centralized metadata repository database answers these questions but remains too slow and cumbersome to handle large amounts of light-speed metadata. Knowledge graphs have the advantage of dealing with large amounts of data quickly. However, knowledge graphs display only specific types of patterns in their metadata repository. Companies need another metadata repository tool. Enter the data catalog, a metadata repository informing consumers what data lives in data systems and the context of that data.


Why edge computing is forcing us to rethink software architectures

The perspective on cloud hardware has since shifted. The current generation of cloud focuses on expensive, high-performance hardware rather than cheap commoditised systems. For one, cloud hardware and data centre architectures are morphing into something resembling an HPC system or supercomputer. Networking has followed the same route, with technologies like InfiniBand EDR and photonics paving the way for ever greater bandwidth and tighter latencies between servers, while backbones and virtual networks have led to improvements in the bandwidth between geographically distant cloud data centres. The other shift currently underway is in the layout of these platforms themselves. The cloud is morphing and merging into edge computing environments where data centres are deployed with significantly greater decentralisation and distribution. Traditionally, an entire continent may be served by a handful of cloud data centres; edge computing moves these computing resources much closer to the end-user — virtually to every city or major town. The edge data centres of every major cloud provider are now integrated into their backbone, providing a sophisticated, geographically dispersed grid.


The Importance of Reliability Engineering

SRE isn’t just a set of practices and policies—it’s a mentality on how to develop software in a culture free of blame. By embracing this new mindset, your team’s morale and camaraderie will improve, allowing everyone to work at their full potential in a psychologically safe environment. SRE teaches us that failure is inevitable. No matter how many precautions you take, incidents happen. While giving you the tools to respond effectively to these incidents, SRE also challenges us to celebrate these failures. When something new goes wrong, it means there’s a chance to learn about your systems. This attitude creates an environment of continuous learning.  When analyzing these inevitable incidents, it’s important to maintain an attitude of blamelessness. Instead of wasting time pointing fingers and finding fault, work together to find the systematic issues behind the incident. By avoiding a culture of blame and shame, engineers are less afraid to proactively raise issues. Team members will trust each other more, assuming good faith in their teammates’ choices. This spirit of blameless collaboration will transform the most challenging incidents into opportunities for growing stronger together.



Quote for the day:

"One must be convinced to convince, to have enthusiasm to stimulate the others." -- Stefan Zweig

Daily Tech Digest - August 08, 2020

Safeguarding the use of complex algorithms and machine learning

The immediate fallout of algorithmic risks can include inappropriate and potentially illegal decisions. And they can affect a range of functions, such as finance, sales and marketing, operations, risk management, information technology, and human resources. Algorithms operate at faster speeds in fully automated environments, and they become increasingly volatile as they interact with other algorithms or social media platforms. Therefore, algorithmic risks can quickly get out of hand. They can also carry broader, long-term implications across a range of risks, including reputational, financial, operational, regulatory, technology, and strategic risks. Given the potential for such long-term negative implications, it’s imperative that algorithmic risks be appropriately and effectively managed. ... A good starting point for implementing an algorithmic risk management framework is to ask important questions about the preparedness of your organization to manage algorithmic risks. For example: Does your organization have a good handle on where algorithms are deployed? Have you evaluated the potential impact should those algorithms function improperly? Does senior management within your organization understand the need to manage algorithmic risks?


How Decision Transformation is Essential to Digital Transformation

The human is better at telling them that the customers aren't happy. And the fact that the customer is unhappy is a crucial determinant in how the decision should be made. So instead of ignoring it in the automation, and throwing up an answer, and then having the person go, "Well that was a stupid answer because this customer is unhappy." Go ahead and ask the person, is the customer unhappy, and if they say yes or no, then use that as part of the decision making. So we find that there's often a role for humans in decision making, but it's often not this supervisory, "You make a suggestion, I'll override it if I feel like it kind of thing." And so we find you have to really understand the structure of your decision making before you can make those judgments. So we encourage people, when we're working with them—Look, let's understand the decision making first and let's understand all of it, automated pieces and the manual pieces. And once we understand all of it, then we can draw a suitable automation boundary to figure out which pieces to digitize, which technologies to use, and make it an integrated whole.


3 Daunting Ways Artificial Intelligence Will Transform The World Of Work

Even in seemingly non-tech companies (if there is such a thing in the future), the employee experience will change dramatically. For one thing, robots and cobots will have an increasing presence in many workplaces, particularly in manufacturing and warehousing environments. But even in office environments, workers will have to get used to AI tools as “co-workers.” From how people are recruited, to how they learn and develop in the job, to their everyday working activities, AI technology and smart machines will play an increasingly prominent role in the average person's working life. Just as we've all got used to tools like email, we'll also get used to routinely using tools that monitor workflows and processes and make intelligent suggestions about how things could be done more efficiently. Tools will emerge to carry out more and more repetitive admin tasks, such as arranging meetings and managing a diary. And, very likely, new tools will monitor how employees are working and flag up when someone is having trouble with a task or not following procedures correctly. On top of this, workforces will become decentralized  – which means the workers of the future can choose to live anywhere, rather than going where the work is.


Facebook open-sources one of Instagram's security tools

While most static analyzers look for a wide range of bugs, Pysa was specifically developed to look for security-related issues. More particularly, Pysa tracks "flows of data through a program." How data flows through a program's code is very important. Most security exploits today take advantage of unfiltered or uncontrolled data flows. For example, a remote code execution (RCE), one of today's worst types of bugs, when stripped down, is basically a user input that reaches unwanted portions of a codebase. Under the hood, Pysa aims to bring some insight into how data travels across codebases, and especially large codebases made up of hundreds of thousands or millions of lines of code. This concept isn't new and is something that Facebook has already perfected with Zoncolan, a static analyzer that Facebook released in August 2019 for Hack -- the PHP-like language variation that Facebook uses for the main Facebook app's codebase. Both Pysa and Zoncolan look for "sources" (where data enters a codebase) and "sinks" (where data ends up). Both tools track how data moves across a codebase, and find dangerous "sinks," such as functions that can execute code or retrieve sensitive user data.
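The source/sink idea can be illustrated with a toy dynamic stand-in. Pysa itself analyzes code statically, without running it; the `Tainted` wrapper and the `source`, `sanitize` and `sink` functions below are invented purely to show what "unfiltered data flow from source to sink" means:

```python
# Toy dynamic illustration of source-to-sink taint tracking (not Pysa itself).

class Tainted(str):
    """Marks data that entered from an untrusted source."""

def source():
    return Tainted("user_input; rm -rf /")  # e.g. an HTTP request parameter

def sanitize(value):
    """A filtering step; returning a plain str models removing the taint."""
    return str(value.replace(";", ""))

def sink(value):
    """A dangerous operation, e.g. shell execution or code eval."""
    if isinstance(value, Tainted):
        raise ValueError("tainted data reached a sink")
    return f"executing: {value}"

data = source()
try:
    sink(data)  # the unfiltered flow a taint analyzer would flag
except ValueError as e:
    print(e)
print(sink(sanitize(data)))  # the sanitized flow passes
```

A static analyzer like Pysa reports the first call path (source reaches sink with no sanitizer in between) without ever executing the program, which is what makes it usable on codebases with millions of lines.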


Google’s New TF-Coder Tool Claims To Achieve Superhuman Performance

TF-Coder uses two ML models in order to predict the needed operations from features of the input/output tensors and a natural language description of the task. These predictions are then combined within a general framework to modify the weights to customise the search process for the given task.  The researchers introduced three key ideas in the synthesis algorithm. Firstly, they introduced per-operation weights to the prior algorithm, allowing TF-Coder to enumerate over TensorFlow expressions in order of increasing complexity. Secondly, they introduced a novel, flexible, and efficient type- and value-based filtering system that handles arbitrary constraints imposed by the TensorFlow library, such as “the two tensor arguments must have broadcastable shapes.” Finally, they developed a framework to combine predictions from multiple independent machine learning models that choose operations to prioritise during the search, conditioned on features of the input and output tensors and a natural language description of the task. The researchers evaluated TF-Coder on 70 real-world tensor transformation tasks from StackOverflow and from an industrial setting.
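A hypothetical miniature of the weighted enumerative search: explore a tiny made-up DSL of list operations cheapest-first by cumulative per-operation weight, returning the first program whose output matches the target. TF-Coder does this over TensorFlow operations with model-predicted weights; the operations, weights and example task here are arbitrary assumptions:

```python
# Best-first enumerative synthesis over a toy DSL, ordered by total op weight.
import heapq

OPS = {  # name: (weight, function) — lower weight = tried earlier
    "reverse": (1, lambda xs: list(reversed(xs))),
    "sort":    (1, sorted),
    "double":  (2, lambda xs: [2 * x for x in xs]),
    "square":  (3, lambda xs: [x * x for x in xs]),
}

def synthesize(inp, out, max_weight=8):
    """Search op sequences in order of increasing cumulative weight."""
    heap = [(0, 0, [], inp)]  # (weight, tiebreak, program, current value)
    tie = 1
    while heap:
        weight, _, prog, val = heapq.heappop(heap)
        if val == out:
            return prog
        for name, (w, fn) in OPS.items():
            if weight + w <= max_weight:
                heapq.heappush(heap, (weight + w, tie, prog + [name], fn(val)))
                tie += 1
    return None

print(synthesize([3, 1, 2], [2, 4, 6]))  # → ['sort', 'double']
```

Learned per-operation weights matter because they shrink the effective search space: operations the models predict are relevant to the given input/output pair and task description get lower weights, so programs using them are enumerated first.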


Microservice Architecture in ASP.NET Core with API Gateway

A traditional approach would be to build a single solution in Visual Studio and then separate the concerns via layers. Thus you would probably have projects like eCommerce.Core, eCommerce.DataAccess and so on. Now, these separations exist only at the level of code organization and are helpful only while developing. When you are done with the application, you will have to publish it to a single server, where you can no longer see the separation in the production environment, right? Now, this is still a fine way to build applications. But let's take a practical scenario. Our eCommerce API has, let's say, endpoints for customer management and product management, pretty common, yeah? Now down the road, there is a small fix or enhancement in the code related to the customer endpoint. If you had built using the monolith architecture, you would have to re-deploy the entire application again and go through several tests to guarantee that the new fix or enhancement did not break anything else. A DevOps engineer would truly understand this pain. But if you had followed a microservice architecture, you would have made separate components for customers, products and so on.


Granularity Decision of Microservice Splitting in View of Maintainability ...

In practical service application, challenges come from both the service and the technique. This section summarizes the features of the four architectures and analyzes their key distinctions (Table 1). In terms of hierarchy, monolithic and vertical architectures centralize the functional modules of each hierarchy with a high degree of coupling. SOA decouples multiple functional modules across the vertical and horizontal hierarchies of three or more tiers, but public modules can only be shared on horizontal hierarchies, leaving the decoupling incomplete. The full self-service flexibility achieved by simultaneous decoupling on vertical and horizontal hierarchies is the main characteristic of microservice architecture; however, when putting large projects into practice, development teams cannot comply with all of these features: they must consider the integration of irreplaceable systems and pursue the flexibility of full decoupling within an acceptable rate of change. The core role of microservice architecture is to cope with the growing service capability within the system and the increasingly complex interaction demands between systems.



Global Cybercrime Surging During Pandemic

The stress and uncertainty caused by the COVID-19 crisis is creating the ideal environment for cybercriminals looking to cash in or create chaos. "Given the impact and scale of COVID-19, cyberattacks related to organizations involved in COVID-19 research or those firms providing relief services have continued to evolve, morph and expand," says Stanley Mierzwa, director of the Center for Cybersecurity at Kean University in Union, New Jersey. "Threat actors will continue to look for areas of vulnerability, and this could potentially reside in 'local' or 'satellite' offices of larger global for-profit, non-profit and non-governmental organizations that may not be utilizing centrally managed or administered systems," Mierzwa says. Craig Jones, who leads the global cybercrime program for Interpol, said in a recent interview with Information Security Media Group: "Certainly in relation to the COVID-19 pandemic, we're seeing a unique combination of events that have led to a whole range of specific criminal opportunities." Criminals haven't shied away from attempting to seize those opportunities, as demonstrated by their rush to rebrand attacks and even "fake news" campaigns to give them a COVID-19 theme, as well as unleash scams involving personal protective equipment, he told ISMG.


Fixing the Biggest IoT Issue — Data Security

By removing the latency and bandwidth scaling issues of cloud-based intelligence, it paves the way for a far more natural human–machine interaction. In the smart home, for example, the AIoT brings a whole new dimension to home control. By coupling voice with human sensing technology, such as presence detection and biometrics, we can build a multi-modal interaction that delivers an energy-efficient, seamless, personalised experience. The TV will know when you’re in the room and ‘wake’ to a standby mode; it will know who you are and, on hearing the wake word, greet you with familiarity and deliver your preferred settings. This kind of interaction also has clear applications across smart cities. Multi-modal sensing opens the path for significant steps forward in safety, security and energy efficiency. Let’s take the humble streetlight: the inclusion of human presence detection would enable it to light up only when a pedestrian or cyclist is in the vicinity. Add in voice control and the lamppost can detect a cry for help — or even the sound of glass breaking — triggering a call to the emergency services for assistance. In offices and public buildings, we won’t need to push buttons on elevators or hunt in our bags for a lift pass; instead our biometrics will form our signature for access, enabling a secure and convenient experience.


Exploring the Forgotten Roots of 'Cyber'

"What does 'cyber' even mean? And where does it come from?" writes Thomas Rid in "Rise of the Machines," his book-length quest to unravel cyber's origin story. Everyone from military officers and spies, to bankers, hackers and scholars "all slapped the prefix 'cyber' in front of something else ... to make it sound more techy, more edgy, more timely, more compelling - and sometimes more ironic," writes Rid, who's a professor of political science at Johns Hopkins University. Cyber has cachet. Cyber inevitably seems to always be pointing to the future. But as Rid writes in his book, "the future of machines has a past," and cyber has long stood not just for a future, utopian merging of humans and machines, but a potential dystopia as well. On the good side exists the potential offered by cyborg-like technologies that might one day, for example, enable humans with spinal injuries to walk again. Such technology may even facilitate the human colonization of Mars. For a view of the flip side, however, take "The Matrix's" rendering of a post-apocalyptic hellhole in which humans have been made to unthinkingly serve machines.



Quote for the day:

"Many people think great entrepreneurs take risks. Great entrepreneurs mitigate risks." -- James Altucher

Daily Tech Digest - August 07, 2020

Intel investigating breach after 20GB of internal documents leak online

US chipmaker Intel is investigating a security breach after 20 GB of internal documents, some marked "confidential" or "restricted secret," were uploaded earlier today to the file-sharing site MEGA. The data was published by Till Kottmann, a Swiss software engineer, who said he received the files from an anonymous hacker who claimed to have breached Intel earlier this year. Kottmann received the Intel leaks because he manages a very popular Telegram channel where he regularly publishes data that accidentally leaked online from major tech companies through misconfigured Git repositories, cloud servers, and online web portals. The Swiss engineer said today's leak represents the first part of a multi-part series of Intel-related leaks. ZDNet reviewed the content of today's files with security researchers who have analyzed Intel CPUs in past work; they deemed the leak authentic but didn't want to be named in this article due to ethical concerns about reviewing confidential data and because of their ongoing relations with Intel. Per our analysis, the leaked files contained Intel intellectual property pertaining to the internal design of various chipsets.


Data Prep for Machine Learning: Normalization

Preparing data for use in a machine learning (ML) system is time consuming, tedious, and error prone. A reasonable rule of thumb is that data preparation requires at least 80 percent of the total time needed to create an ML system. There are three main phases of data preparation: cleaning; normalizing and encoding; and splitting. Each of the three phases has several steps. A good way to understand data normalization and see where this article is headed is to take a look at the screenshot of a demo program. The demo uses a small text file named people_clean.txt where each line represents one person. There are five fields/columns: sex, age, region, income, and political leaning. The "clean" in the file name indicates that the data has been standardized by removing missing values, and editing bad data so that all lines have the same format, but numeric values have not yet been normalized. The ultimate goal of a hypothetical ML system is to use the demo data to create a neural network model that predicts political leaning from sex, age, region, and income. The demo analyzes the age and income predictor fields, then normalizes those two fields using a technique called min-max normalization. The results are saved as a new file named people_normalized.
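The min-max technique the demo applies can be sketched in a few lines. The age values below are invented for illustration; the demo's actual data file and fields are as described above:

```python
# Min-max normalization: rescale each value of a numeric column to [0, 1]
# using the column's minimum and maximum.

def min_max_normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

ages = [24, 39, 51, 33, 60]          # hypothetical "age" column
normalized = min_max_normalize(ages)  # 24 maps to 0.0, 60 maps to 1.0
```

After normalization, every value lies in [0, 1], so fields with very different natural scales (such as age and income) contribute comparably during neural network training.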


Microsoft Teams Patch Bypass Allows RCE

While Microsoft tried to cut off this vector as a conduit for remote code execution by restricting the ability to update Teams via a URL, it was not a complete fix, the researcher explained. “The updater allows local connections via a share or local folder for product updates,” Jayapaul said. “Initially, when I observed this finding, I figured it could still be used as a technique for lateral movement, however, I found the limitations added could be easily bypassed by pointing to an…SMB share.” Server Message Block (SMB) protocol is a network file sharing protocol. To exploit this, an attacker would need to drop a malicious file into an open shared folder – something that typically involves already having network access. However, to reduce this gating factor, an attacker can create a remote rather than local share. “This would allow them to download the remote payload and execute rather than trying to get the payload to a local share as an intermediary step,” Jayapaul said. Trustwave has published a proof-of-concept attack that uses Microsoft Teams Updater to download a payload – using known, common software called Samba to carry out remote downloading.


Federated learning improves how AI data is managed, thwarts data leakage

Researchers believe a shift in the way data is managed could allow more information to reach learning algorithms outside of a single institution, which could benefit the entire system. Penn Medicine researchers propose using a technique called federated learning that would allow users to train an algorithm across multiple decentralized data sources without having to actually exchange the data sets. Federated learning works by training an algorithm across many decentralized edge devices, as opposed to running an analysis on data uploaded to one server. "The more data the computational model sees, the better it learns the problem, and the better it can address the question that it was designed to answer," said Spyridon Bakas, an instructor in the Perelman School of Medicine at the University of Pennsylvania, in a press release. Bakas is lead author of a study on the use of federated learning in medicine that was published in the journal Scientific Reports. "Traditionally, machine learning has used data from a single institution, and then it became apparent that those models do not perform or generalize well on data from other institutions," Bakas said.
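The core idea — each site computes a model update on its own data, and only the updates are aggregated — can be sketched with a toy one-parameter model. The two "institutions" and their data are invented; real federated systems train neural networks and add secure aggregation on top:

```python
# Toy sketch of federated averaging: the raw data never leaves each site;
# only locally computed model updates are shared and averaged.

def local_update(weight, data, lr=0.1):
    """One gradient step of least squares y = w*x on one site's data."""
    grad = sum(2 * x * (weight * x - y) for x, y in data) / len(data)
    return weight - lr * grad

def federated_round(weight, sites):
    updates = [local_update(weight, data) for data in sites]  # data stays local
    return sum(updates) / len(updates)        # only updates are averaged

# Two hypothetical institutions, each privately holding samples of y = 3*x
site_a = [(1.0, 3.0), (2.0, 6.0)]
site_b = [(3.0, 9.0)]

w = 0.0
for _ in range(50):
    w = federated_round(w, [site_a, site_b])
# w converges toward the shared underlying slope of 3.0
```

The aggregated model benefits from both sites' data even though neither site ever saw the other's records, which is the property that makes the approach attractive for medical data.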


10 Tools You Should Know As A Cybersecurity Engineer

Wireshark is the world’s best network analyzer tool. It is an open-source software that enables you to inspect real-time data on a live network. Wireshark can dissect packets of data into frames and segments, giving you detailed information about the bits and bytes in a packet. Wireshark supports all major network protocols and media types. Wireshark can also be used as a packet sniffing tool if you are on a public network, with access to traffic across the network connected to a router. ... Netcat is a simple but powerful tool that can view and record data on TCP or UDP network connections. Netcat functions as a back-end listener that allows for port scanning and port listening. You can also transfer files through Netcat or use it as a backdoor to your victim machine. This makes it a popular post-exploitation tool for establishing connections after successful attacks. Netcat is also extensible given its capability to add scripting for larger or redundant tasks. Despite Netcat's popularity, it was not actively maintained by its community. The Nmap team built an updated version of Netcat called Ncat with features including support for SSL, IPv6, SOCKS, and HTTP proxies.


Hey software developers, you’re approaching machine learning the wrong way

Unfortunately, lots of folks who set out to learn Machine Learning today have the same experience I had when I was first introduced to Java. They’re given all the low-level details up front — layer architecture, back-propagation, dropout, etc — and come to think ML is really complicated and that maybe they should take a linear algebra class first, and give up. That’s a shame, because in the very near future, most software developers effectively using Machine Learning aren’t going to have to think or know about any of that low-level stuff. Just as we (usually) don’t write assembly or implement our own TCP stacks or encryption libraries, we’ll come to use ML as a tool and leave the implementation details to a small set of experts. At that point — after Machine Learning is “democratized” — developers will need to understand not implementation details but instead best practices in deploying these smart algorithms in the world. ... What makes Machine Learning algorithms distinct from standard software is that they’re probabilistic. Even a highly accurate model will be wrong some of the time, which means it’s not the right solution for lots of problems, especially on its own. Take ML-powered speech-to-text algorithms: it might be okay if occasionally, when you ask Alexa to “Turn off the music,” she instead sets your alarm for 4 AM.


Garmin Reportedly Paid a Ransom

WastedLocker, a ransomware strain that reportedly shut down Garmin's operations for several days in July, is designed to avoid security tools within infected devices, according to a technical analysis from Sophos. In June and July, several research firms published reports on WastedLocker, noting that the ransomware appears connected to the Evil Corp cybercrime group, originally known for its use of the Dridex banking Trojan. "Because WastedLocker has no known security vulnerabilities in how it performs its encryption, it's unlikely that Garmin obtained a working decryption key that fast in any other way but by paying the ransom," Chris Clements, vice president of solutions architecture for Cerberus Sentinel, tells ISMG. Fausto Oliveira, principal security architect at the security firm Acceptto, adds: "What I believe happened is that Garmin was unable to recover their services in a timely manner. Four days of disruption is too long if they are using any reliable type of backup and restore mechanisms. That might have been because their disaster recovery backup strategy failed or the invasion was to the extent that backup sources were compromised as well."


Splicing a Pause Button into Cloud Machines

Splice Machine was born in the days of Hadoop, and uses some of the same underlying data processing engines that were distributed in that platform. But Splice Machine has surpassed the capabilities of that earlier platform by ensuring tight integration with those engines in support of its customers' enterprise AI initiatives, not to mention elastic scaling via Kubernetes. The way that Splice Machine engineered HBase (for storage) and Spark (for analytics), and its enablement of ACID capabilities for SQL transactions, are core differentiating factors that weigh in Splice Machine’s favor for being a platform on which to build real-time AI applications, according to Zweben. “Doing table scans as the basis of an analytical workload is abysmally slow in HBase, and so, in Splice Machine, we engineered at a very low level the access to the HBase storage with a wrapper of transactionality around it, so you’re only seeing what’s been committed in the database based on ACID semantics,” Zweben explained. “That goes under the cover at a very well-engineered level, looking at the HBase storage and grabbing that into Spark dataframes,” he continued. “We’ve engineered tightly integrated connectivity for performance. ...”


How Synthetic Data Accelerates Coronavirus Research

To access data at the speed required while also respecting the privacy and governance needs of patient data, Washington University in St. Louis, Jefferson Health in Philadelphia, and other healthcare organizations have opted for an alternative, using something called synthetic data. Gartner defines synthetic data as data that is "generated by applying a sampling technique to real-world data or by creating simulation scenarios where models and processes interact to create completely new data not directly taken from the real world." Here's how Payne describes it: "We can take a set of data from real-world patients but then produce a synthetic derivative that statistically is identical to those patients' data. You can drill down to the individual row level and it will look like the data extracted from the EHR (electronic health record), but there's no mutual information that connects that data to the source data from which it is derived." Why is that so important? "From the legal and regulatory and technical standpoint, this is no longer potentially identifiable human subjects' data, so now our investigators can literally watch a training video and get access to the system," Payne said. "They can sign a data use agreement and immediately start iterating through their analysis."
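One very simplified way to see the idea of a synthetic derivative is to resample each column independently of the others: marginal statistics survive, but no synthetic row corresponds to any real patient. Production systems use far more sophisticated generators that also preserve cross-column structure; the records below are invented for illustration:

```python
# Toy sketch of a "synthetic derivative": resample columns independently
# so value distributions are preserved while no output row can be traced
# back to a source row. All records here are invented.
import random

real_records = [
    {"age": 34, "systolic_bp": 118},
    {"age": 51, "systolic_bp": 135},
    {"age": 67, "systolic_bp": 142},
    {"age": 45, "systolic_bp": 127},
]

def synthesize(records, n, seed=0):
    rng = random.Random(seed)
    columns = {k: [r[k] for r in records] for k in records[0]}
    # sampling each column independently severs the link between columns,
    # removing the mutual information that ties a row to a real patient
    return [{k: rng.choice(v) for k, v in columns.items()} for _ in range(n)]

synthetic = synthesize(real_records, n=1000)
```

Every synthetic value is a plausible one from the real distribution, yet the pairing of age and blood pressure in any given synthetic row is newly generated rather than copied from a patient.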


Realtime APIs: Mike Amundsen on Designing for Speed and Observability

For systems to perform as required, data read and write patterns will frequently have to be reengineered. Amundsen suggested judicious use of caching results, which can remove the need to constantly query upstream services. Data may also need to be “staged” appropriately throughout the entire end-to-end request handling process. For example, caching results and data in localized points of presence (PoPs) via content delivery networks (CDNs), caching in an API gateway, and replication of data stores across availability zones (local data centers) and globally. For some high transaction throughput use cases, writes may have to be streamed to meet demand, for example, writing data locally or via a high throughput distributed logging system like Apache Kafka for writing to an external data store at a later point in time. Engineers may have to “rethink the network,” (respecting the eight fallacies of distributed computing), and design their cloud infrastructure to follow best practices relevant to their cloud vendor and application architecture. Decreasing request and response size may also be required to meet demands. This may be engineered in tandem with the ability to increase the message volume. 
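The "cache results to avoid re-querying upstream" pattern mentioned above can be sketched with a minimal TTL cache. The upstream function and key format here are hypothetical stand-ins for a slow service call:

```python
# Minimal sketch of a TTL cache in front of a slow upstream service call.
import time

class TTLCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}          # key -> (value, expiry timestamp)

    def get_or_fetch(self, key, fetch):
        entry = self._store.get(key)
        if entry and entry[1] > time.monotonic():
            return entry[0]                      # fresh: skip the upstream call
        value = fetch(key)                       # stale or missing: go upstream
        self._store[key] = (value, time.monotonic() + self.ttl)
        return value

calls = []
def upstream(key):                # stands in for a slow upstream service query
    calls.append(key)
    return f"data-for-{key}"

cache = TTLCache(ttl_seconds=60)
a = cache.get_or_fetch("user:42", upstream)   # miss: hits upstream
b = cache.get_or_fetch("user:42", upstream)   # hit: served from cache
```

The same shape applies at every tier Amundsen mentions (CDN PoP, API gateway, local data store); only the TTL and invalidation strategy change.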



Quote for the day:

"The secret of leadership is simple: Do what you believe in. Paint a picture of the future. Go there. People will follow." -- Seth Godin

Daily Tech Digest - August 06, 2020

It’s time to think differently about how to develop cloud computing talent

“Certifications help set a benchmark for a conversation, but we tend to verify during interviews. Personally, I’m far more interested in curiosity, a desire to solve a problem, being self-starters — this talent goes much further as you develop it,” he adds. Sean Farrington, SVP EMEA at Pluralsight, also believes that developing and maintaining cloud computing skills once talent is in place is a challenge. Businesses “need the ability to accurately map skill levels and proficiencies within teams and put in place tailored learning pathways to address knowledge gaps,” he says. Success in this requires a reassessment of how learning is undertaken. Pluralsight, for example, found that 40% of IT professionals prefer learning online, either through self-paced or instructor led courses, rather than in classroom-based setups. Commenting on this, Farrington adds: “Companies are nothing more than the sum of their parts, and so business leaders must listen to the needs of their employees and implement an appropriate learning environment. In this case, the ability to upskill on demand and in bite-sized chunks is likely to keep cloud computing talent motivated, current and project-ready.”


Working With Intelligence: How AI Will Reshape Remote Work

HR managers and associates are required to undertake many tasks that allow them to comply with legal requirements for hiring as well as the policies issued by their respective companies. Finding the right candidate can be a time-consuming process when all of these compliances are taken into account. However, businesses can create remote positions that ease the load for managers or in-house employees. One of the criticisms about WFH surrounds a business’ ability to monitor the productivity and quality of output from external workers. Fortunately, artificial intelligence and machine learning are on hand to help out. Team leaders, supervisors and managers alike can turn to machine learning programs to monitor staff performance in a non-invasive and accurate manner. More modern systems are capable of utilising information through survey-based tools in order to provide impartial performance reviews and deliver accurate reports that indicate respective employee strengths and weaknesses on a case-by-case basis. Here, technology takes the lead and creates a level of analysis that’s difficult to replicate through human management. This is especially true for companies with a large number of employees that work from remote locations.


Overcoming the Evolving DevOps Skills Gap

It’s clear that in-demand skills don’t always remain in vogue for very long. To help limit the variability in expertise needed from year to year, companies should invest in tools that don’t constantly require learning new techniques to operate and that can automate tasks whenever possible. For example, a growing number of companies work with multiple cloud providers to ensure their applications and services are always available. While a multi-cloud strategy offers benefits, it also likely means running different projects on different providers’ clouds. To limit the amount of skills needed, companies can select container tools that deploy easily to multiple cloud environments without significantly affecting application topology.  Furthermore, tools that automate repetitive processes can help your company reconcile a skills gap. Leveraging solutions that automate processes tied to risk, compliance, and governance can help people focus on their core responsibilities and objectives rather than conducting manual data analyses or attempting to learn data-privacy law.  Thoughtfully employing technology can also help close skill gaps. With everyone now working remotely, there are fewer opportunities for in-person training and mentoring.


Why developers are falling in love with functional programming

A function with clearly declared in- and outputs is one without side effects. And a function without side effects is a pure function. A very simple definition of functional programming is this: writing a program only in pure functions. Pure functions never modify variables, but only create new ones as an output. (I cheated a bit in the example above: it goes along the lines of functional programming, but still uses a global list. You can find better examples, but it was about the basic principle here.) Moreover, you can expect a certain output from a pure function with a given input. In contrast, an impure function may depend on some global variable; so the same input variables may lead to different outputs if the global variable is different. The latter can make debugging and maintaining code a lot harder. There’s an easy rule to spot side effects: as every function must have some kind of in- and output, function declarations that go without any in- or output must be impure. These are the first declarations that you might want to change if you’re adopting functional programming.
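The pure/impure distinction drawn above can be made concrete with a short sketch (the price functions are invented examples):

```python
# Impure vs. pure: the impure version reads shared state, so the same call
# can yield different results over time; the pure version depends only on
# its declared inputs and never mutates anything.

discount = 0.1                            # global state

def impure_price(amount):
    return amount * (1 - discount)        # output depends on a global variable

def pure_price(amount, discount_rate):
    return amount * (1 - discount_rate)   # same inputs -> same output, always

def pure_append(items, item):
    return items + [item]                 # returns a new list, never mutates

basket = [1, 2]
bigger = pure_append(basket, 3)           # basket itself is untouched
```

Notice that `impure_price(100)` will silently change its answer if any other code reassigns `discount`, which is exactly the kind of hidden dependency that makes debugging harder.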


IoT Automation Trend Rides Next Wave of Machine Learning, Big Data

Automation takes on a different aspect when IoT data is introduced, according to Susan Foss, product manager for real-time visualization and analytics at Esri, the geographic information system (GIS) giant. What is different? “It’s the nature of the data being collected,” she said. “Organizations have never had this type of information before or at this granularity of time-space detail.” “Before it was more periodic. Now they have it in the form of a living, breathing, constant supply,” she added. That ushers in event processing architectures, changes the pace with which teams have to work with data, and augurs more automation. Foss said Esri is working with users to connect fast-arriving IoT data to location data. The goal is to create immediate visualizations of data on a map. This requires, Foss said, “a delicate balance of compute horsepower against the incoming real-time data, as well as static data sources that might need to be used with it.” And, real-time activity mapping is going indoors in the face of the COVID-19 pandemic. To that end, Esri recently updated its ArcGIS Indoors offering with new space planning templates. The software uses beacons and Wi-Fi to collect data for display on a live map showing activity in offices and other physical plants. Clearly, such capabilities have special import in the wake of coronavirus.


The Right Way of Tracing AWS Lambda Functions

This increased distribution and interdependency is precisely why distributed tracing has grown to be so important and valuable. Distributed tracing is a monitoring practice in which your services collectively and collaboratively record spans that describe the actions they take in servicing one request. The spans related to the same request are grouped in a trace. In order to keep track of which trace is being recorded, each service must include the trace context in its own requests towards other upstream services. In a nutshell, you can think of distributed tracing as a relay race, the track-and-field discipline in which athletes take turns running and passing one another the baton. In this analogy, each service is an athlete and the trace context is the baton: if one of the services drops it, or the handoff between services fails because, for example, they implement different distributed tracing protocols, the trace is broken. Another similarity between distributed tracing and relay racing is that any single segment can lose you the race: you need to be fast in each segment to excel.
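The baton handoff can be sketched in a few lines. The service names and the context format below are invented; real systems use standardized propagation formats (such as W3C Trace Context headers) rather than an in-process dictionary:

```python
# Toy sketch of trace context propagation: each service attaches the
# incoming context ("the baton") to its outgoing calls and records a span.
import uuid

recorded_spans = []

def record_span(context, service, operation):
    recorded_spans.append({"trace_id": context["trace_id"],
                           "service": service, "operation": operation})

def checkout_service(request, context):
    record_span(context, "checkout", "place_order")
    return payment_service(request, context)   # baton handed off intact

def payment_service(request, context):
    record_span(context, "payment", "charge_card")
    return "ok"

# the edge service starts the trace by creating the context
context = {"trace_id": str(uuid.uuid4())}
status = checkout_service({"order": 1}, context)

# every span carries the same trace id, so the trace can be reassembled
trace_ids = {span["trace_id"] for span in recorded_spans}
```

If `payment_service` dropped or replaced the context instead of forwarding it, its span would carry a different trace id and the trace would be broken, which is exactly the dropped-baton failure described above.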


Evil AI: These are the 20 most dangerous crimes that artificial intelligence will create

At the bottom of the threat hierarchy, the researchers listed some "low-concern" applications – the petty crime of AI, if you will. On top of fake reviews or fake art, the report also mentions burglar bots, small devices that could sneak into homes through letterboxes or cat flaps to relay information to a third party. Burglar bots might sound creepy, but they could be easily defeated – in fact, they could pretty much be stopped by a letterbox cage – and they couldn't scale. As such, the researchers don't expect that they will cause huge trouble anytime soon. The real danger, according to the report, lies in criminal applications of AI that could be easily shared and repeated once they are developed. UCL's Matthew Caldwell, first author of the report, said: "Unlike many traditional crimes, crimes in the digital realm can be easily shared, repeated, and even sold, allowing criminal techniques to be marketed and for crime to be provided as a service. This means criminals may be able to outsource the more challenging aspects of their AI-based crime." The marketisation of AI-enabled crime, therefore, might be just around the corner.


Organic data-transfer technology holds promise for IoT

Significantly, point-to-point links using devices made of organic matter could solve some sustainability issues, according to the U.K.'s Newcastle University. The tech industry has long wrestled with questions about how to encourage and make economical the recycling of hard-to-breakdown traditional electronics. LEDs are full of heavy metals, for example. Increasingly rapid lifecycle upgrades have exacerbated the challenges, and as IoT deployments expand, those questions could become even more pressing. OLEDs could be a solution, but data rates haven't been that great—they're not as powerful. At Newcastle University, researchers believe a new type of OLED could enable the faster data speeds required in a VLC-driven IoT communications network. Significantly, the OLED would be sustainable, since OLEDs are natural, organic and free of eco-unfriendly heavy metals. OLEDs have achieved around 10 Mbps speeds with add-on equalization algorithms and wavelength division multiplexing, whereas eco-unfriendly LEDs churn a healthy 35 Gbps. Equalization is a process where a specific band's energy is increased or decreased to level things out and improve data rates and bandwidth.


What Is Fintech And How Does It Affect How I Bank?

Fintech helps expedite processes that once took days, weeks or even months, like requesting a credit score report or sending an international money transfer. Platforms like Upstart and TransferWise accomplish these tasks in a fraction of the time as was the norm even five years ago. There’s been speculation about how fintech might help expedite traditionally red-tape-bound processes like distributing economic stimulus funds. Fintech also holds the potential to improve financial inclusion: In some parts of the world, fintech fills needs for the unbanked, where governmental or institutional support is lacking. Part of the reason fintech has the ability to streamline traditionally clunky processes is because it’s based in ones and zeros versus human skills and opinions. While many fintech platforms include elements of both traditional brokers/advisors and algorithms, others help users navigate financially complex tasks without interacting with a real, live human at all. For instance, today’s consumers can bypass traditional bank branches for things like applying for a loan (Lending Club) or even a mortgage (Better.com). Casual investors no longer need to meet face-to-face with financial experts to painstakingly go over the ins and outs of their portfolios—they can peruse their options online, or even enlist the help of chatbots to make decisions.


People: The one constant in an ever-evolving time of change

Despite the emphasis on speed, however, it is important that people remain a constant, central focus of the process. As such, a new, broadly applicable approach to change management is necessary to ensure clients, customers, and employees reach and maintain success. The creation of innovative solutions, made possible by tapping into lucrative fintech partnerships and digital initiatives, should be focused on building a strong, organizational culture that will effectively support people through these changes. As we put people first in the change model, some of our partners, and one regional northeast bank in particular, recently reinforced how thinking outside the box can pay it forward for change. The CEO utilized an innovative approach to leverage the role of bankers in the process. Rather than give the core responsibility to the digital and IT teams, he provided bankers and sales professionals a seat at the transformation table. By integrating front-end bankers into the core change management team, high performing bankers were able to think about front-end, client-facing concerns that other members of the team may not have experienced.



Quote for the day:

"Either write something worth reading or do something worth writing." -- Benjamin Franklin