Daily Tech Digest - August 12, 2020

Can behavioural banking drive financial literacy and inclusion?

In good times, the need to improve financial literacy is widely accepted by banking industry leaders and consumers alike. This important topic is regularly discussed by experts at the World Economic Forum and built into initiatives sponsored by the United Nations. Regarded as an economic good, financial literacy is critical to achieving financial inclusion. What about now, in decidedly less-than-good times? How are banks prepared to promote financial literacy for millennials and especially Gen Z, as they face a world in financial turmoil? ... The right systems helped the bank get up and running just 18 months after its initial launch announcement. Powerful, reliable technology also helped the company create a customer onboarding application that can open a new account within just five minutes. “The technology is extremely important for us,” says Frey. “It has to be fast, agile, and robust. We needed a solid workhorse with a huge amount of flexibility at the configuration level.” In 2020, Discovery will begin looking for ways to incorporate rapidly developing technologies such as artificial intelligence and machine learning into its solutions. Most important, however, is listening to customers and ensuring that the bank delivers the most pleasant, rewarding experience possible.


With DevOps, security is everybody’s responsibility. OK, so what’s next?

DevSecOps solutions are by nature designed to be preventative. The idea is to remove complexity by baking robust security methodologies into software development from the earliest stages. Get it right from the outset, and reactive firefighting is greatly reduced. Conveniently, this model – “shifting security left” to the coder rather than the expert in a fixed hierarchy – also makes sense when developing on cloud platforms that assume rapid deployment and collaboration. There is no development team, security team, or IT deployment team because they are one and the same person. In theory, that’s how security misconfigurations can be caught before they do harm. However, when it comes to cloud development, “shift left” is more talked about than practised. This situation has crept up on organisations that haven’t realised how rapidly programming culture has changed in the cloud era. “There is a lack of control in this model. With the shift into cloud development and the fact that coders can always get a better answer on Stack Overflow and GitHub, it’s become practically impossible to track the supply chain. It’s a governance problem,” says Guy Eisenkot.


Surface Duo: Microsoft's $1,400 dual-screen Android phone coming September 10

Microsoft is counting on users seeing the Duo as filling an untapped niche. But for people used to thinking about carrying no more than two devices -- usually a PC/tablet or phone -- where does the Duo fit? In its first iteration, with a seemingly mediocre 11 MP camera, an older Snapdragon 855 processor and a relatively heavy form factor (about half a pound), the Duo is not going to replace my Pixel 3XL Android phone. And with a total screen size when open of 8.1 inches, the Duo is just too small to replace my PC. Panay and team are touting the Duo as a device that will give people a better way to get things done, to create and to connect. As was the case with the currently postponed, Windows 10X-based Surface Neo device, Microsoft's contention is that two separate screens connected via a hinge help people work smarter and faster than they could with a single screen of any size. Officials say they've got research and years of work that back up this claim. I do think more screen is better for almost everything, but for now, I am having trouble buying the idea that a hinge/division in the middle of two screens is going to make any kind of magic happen in my brain.


The clear Sky strategy

You need to have your eyes on the horizon and your feet on the floor. At all times. And it’s quite a discipline to do that. You see a lot of people who are consumed with managing the now, and then if you look at the last few months, there’s not been a lot of forward thinking. Then you also see other people who, perhaps the longer they are in their roles, spend more and more time thinking about the future horizon. That’s all very alluring and appealing, but they disconnect from the immediacy of what’s important today. You must try to think of both of those things and also encourage everybody else to think of their own role in that way. So, if you’re in broadcast technology today and you’re running that function or department, how do you get your colleagues to look at the future broadcast technologies and at the same time equip people to shoot with their iPhones and get the news out quickly? What you end up with is this networked brain. Everybody in Sky should be thinking about where the company should go, but also “How do I personally make sure I’m doing what is needed?”


Did Intel fail to protect proprietary secrets, or misconfigure servers? Lessons from the leak

Regardless of the circumstances, there are key takeaways from the incident. First and foremost, the unauthorized disclosure of source code and other sensitive intellectual property could potentially be a boon for those seeking to steal corporate secrets. “Intel’s technology is almost ubiquitous, and the leaked device designs and firmware source code can put businesses and individuals at risk,” said Ilia Sotnikov, VP of product management at Netwrix. “Hackers and Intel’s own security research team are probably racing now to identify flaws in the leaked source code that can be exploited. Companies should take steps to identify what technology may be impacted and stay tuned for advisory and hotfix announcements from Intel.” “While we often think of data breaches in the context of customer data lost and potential PII leakage, it is very important that we also consider the value of intellectual property, especially for very innovative organizations and organizations with a large market share,” said Erich Kron, security awareness advocate at KnowBe4. “This intellectual property can be very valuable to potential competitors, and even nation states, who often hope to capitalize on the research and development done by others.”


Researchers Trick Facial-Recognition Systems

The model then continuously created and tested fake images of the two individuals by blending the facial features of both subjects. Over hundreds of training loops, the machine-learning model eventually got to a point where it was generating images that looked like a valid passport photo of one of the individuals, even as the facial recognition system identified the photo as the other person. Povolny says the passport-verification system attack scenario — though not the primary focus of the research — is theoretically possible to carry out. Because digital passport photos are now accepted, an attacker can produce a fake image of an accomplice, submit a passport application, and have the image saved in the passport database. So if a live photo of the attacker later gets taken at an airport — at an automated passport-verification kiosk, for instance — the image would be identified as that of the accomplice. "This does not require the attacker to have any access at all to the passport system; simply that the passport-system database contains the photo of the accomplice submitted when they apply for the passport," he says.


The problems AI has today go back centuries

The ties between algorithmic discrimination and colonial racism are perhaps the most obvious: algorithms built to automate procedures and trained on data within a racially unjust society end up replicating those racist outcomes in their results. But much of the scholarship on this type of harm from AI focuses on examples in the US. Examining it in the context of coloniality allows for a global perspective: America isn’t the only place with social inequities. “There are always groups that are identified and subjected,” Isaac says. The phenomenon of ghost work, the invisible data labor required to support AI innovation, neatly extends the historical economic relationship between colonizer and colonized. Many former US and UK colonies—the Philippines, Kenya, and India—have become ghost-working hubs for US and UK companies. The countries’ cheap, English-speaking labor forces, which make them a natural fit for data work, exist because of their colonial histories. AI systems are sometimes tried out on more vulnerable groups before being implemented for “real” users. Cambridge Analytica, for example, beta-tested its algorithms on the 2015 ...


The State of AI-Driven Digital Transformation

Governments are transforming service delivery through AI as well. In China, a number of AI pilot programmes are rolling out across the court system, including an “AI robot” that can answer legal questions in real time, tools to automate evidence analysis and the automated transcribing of court proceedings that would remove the need for judicial clerks to double as stenographers. These technological developments point to a future in which routine court procedures are mostly handled by machines, so that judges can reserve their attention for more complex and demanding cases. The other major use of AI would be in the areas of security and data privacy. In fact, the Forrester study found that 61 percent of firms in APAC are already enhancing or implementing their data privacy and security-related capabilities using AI. For example, financial services giant AXA IT has been leveraging machine learning and AI to thwart online security threats. They’ve partnered with cybersecurity firm Darktrace whose Enterprise Immune System learns how normal users behave so as to detect dangerous anomalies with the help of AI. Data lie at the heart of AI. The success of AI-driven digital transformation, therefore, relies greatly on the ability to draw insights from big data. 


How to Keep APIs Secure From Bot Attacks

Many APIs do not check authentication status when the request comes from a genuine user. Attackers exploit such flaws in different ways, such as session hijacking and account aggregation, to imitate genuine API calls. Attackers also reverse engineer mobile applications to discover how APIs are invoked. If API keys are embedded into the application, an API breach may occur. API keys should not be used for user authentication. Cybercriminals also perform credential stuffing attacks to take over user accounts. ... Many APIs lack robust encryption between the API client and server. Attackers exploit vulnerabilities through man-in-the-middle attacks. Attackers intercept unencrypted or poorly protected API transactions to steal sensitive information or alter transaction data. Also, the ubiquitous use of mobile devices, cloud systems and microservice patterns further complicates API security because multiple gateways are now involved in facilitating interoperability among diverse web applications. The encryption of data flowing through all these channels is paramount. ... APIs are vulnerable to business logic abuse. This is exactly why a dedicated bot management solution is required and why applying detection heuristics that are good for both web and mobile apps can generate many errors — false positives and false negatives.
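The point that API keys should not be used for user authentication comes down to this: a key identifies an application, while a user needs a short-lived, per-user credential that is re-verified on every call. A minimal sketch of that idea, using a generic HMAC-signed token (all names and the token format here are hypothetical illustrations, not any particular product's scheme):

```python
import hmac
import hashlib
import time

SECRET = b"server-side-secret"  # illustrative only; store securely in practice


def sign(user, expires):
    """HMAC over the user and expiry, keyed with the server secret."""
    msg = f"{user}:{expires}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()


def issue_token(user, ttl=3600):
    """Issue a short-lived, per-user token: user:expiry:signature."""
    expires = int(time.time()) + ttl
    return f"{user}:{expires}:{sign(user, expires)}"


def verify_token(token):
    """Every API call re-checks signature and expiry; no implicitly trusted callers."""
    try:
        user, expires, sig = token.rsplit(":", 2)
        exp = int(expires)
    except ValueError:
        return None  # malformed token
    if exp < time.time():
        return None  # expired
    if not hmac.compare_digest(sig, sign(user, exp)):
        return None  # tampered
    return user
```

Note the use of `hmac.compare_digest` rather than `==`, which avoids leaking information through comparison timing.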


Blazor vs Angular

Blazor is also a framework that enables you to build client web applications that run in the browser, but using C# instead of TypeScript. When you create a new Blazor app it arrives with a few carefully selected packages (the essentials needed to make everything work) and you can install additional packages using NuGet. From here, you build your app as a series of components, using the Razor markup language, with your UI logic written using C#. The browser can't run C# code directly, so just like the Angular AOT approach you'll lean on the C# compiler to compile your C# and Razor code into a series of .dll files. To publish your app, you can use .NET's built-in publish command, which bundles up your application into a number of files (HTML, CSS, JavaScript and DLLs), which can then be published to any web server that can serve static files. When a user accesses your Blazor WASM application, a Blazor JavaScript file takes over, which downloads the .NET runtime, your application and its dependencies before running your app using WebAssembly. Blazor then takes care of updating the DOM, rendering elements and forwarding events (such as button clicks) to your application code.


AI company pivots to helping people who lost their job find a new source of health insurance

In addition to making health insurance somewhat easier to get, the Affordable Care Act funded navigators who helped individuals choose the right insurance plan. The Trump administration cut funding for the navigators from $63 million in 2016 to $10 million in 2018. During the 2019 open enrollment period for the federal ACA health insurance marketplace, overall enrollment dropped by 306,000 people. "While that may not seem like a lot, the average annual medical expense is around $3,000 per person, and a shortfall of covered patients could represent over $900,000,000 of medical expenses that will not be paid by health insurance," Showalter said. When states banned elective medical procedures temporarily during the early months of the pandemic, this cut off an important revenue stream for hospitals, and many laid off workers. Some of these layoffs included patient navigators who helped patients enroll in health insurance, particularly Medicaid. Showalter said that all Jvion customers have had at least a few navigators on staff but not enough to reach every patient in need of assistance.



Quote for the day:

"A good general not only sees the way to victory; he also knows when victory is impossible." -- Polybius

Daily Tech Digest - August 11, 2020

How AI can create self-driving data centers

Data centers are full of physical equipment that needs regular maintenance. AI systems can go beyond scheduled maintenance and help with the collection and analysis of telemetry data that can pinpoint specific areas that require immediate attention. "AI tools can sniff through all of that data and spot patterns, spot anomalies," Schulz says. "Health monitoring starts with checking if equipment is configured correctly and performing to expectations," Bizo adds. "With hundreds or even thousands of IT cabinets with tens of thousands of components, such mundane tasks can be labor intensive, and thus not always performed in a timely and thorough fashion." He points out that predictive equipment-failure modeling based on vast amounts of sensory data logs can "spot a looming component or equipment failure and assess whether it needs immediate maintenance to avoid any loss of capacity that might cause a service outage." Michael Bushong, vice president of enterprise and cloud marketing at Juniper Networks, argues that enterprise data-center operators should ignore some of the overpromises and hype associated with AI, and focus on what he calls "boring innovations."
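The anomaly-spotting idea Schulz describes can be reduced to a toy illustration: compare incoming telemetry against statistics from a known-good baseline and flag outliers. Production systems use far richer models; this sketch (with made-up temperature readings) only shows the shape of the technique:

```python
from statistics import mean, stdev


def anomalies(readings, baseline, threshold=3.0):
    """Flag readings more than `threshold` standard deviations
    from the mean of a known-good baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in readings if abs(x - mu) > threshold * sigma]


# Known-good inlet temperatures (deg C) collected during normal operation.
baseline = [21.0, 21.3, 20.8, 21.1, 21.2, 20.9]

# A sudden 35.5 reading (e.g. a failing fan) stands out against the baseline.
flagged = anomalies([21.2, 35.5, 20.9], baseline)
```

Using a separate baseline, rather than the incoming batch itself, matters: a large outlier inflates the batch's own standard deviation and can mask itself.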


Hackers Could Use IoT Botnets to Manipulate Energy Markets

Unlike regular IoT botnets that are ubiquitous and available for hire on criminal forums, high-wattage botnets are not as practical to amass. None are known to be available for rent by would-be attackers. But over the past couple of years, researchers have begun investigating how they could be weaponized—one example looked at the possibility of mass blackouts—in anticipation that such botnets will someday emerge. Meanwhile, the idea of energy market manipulation in general is not far-fetched. The US Federal Energy Regulatory Commission investigated 16 potential market manipulation cases in 2018, though it closed 14 of them with no action. Additionally, in mid-May, attackers breached the IT systems of Elexon, the platform used to run the United Kingdom's energy market. The attack did not appear to result in market changes. The researchers caution that, based on their analysis, much smaller demand fluctuations than you might expect could affect pricing, and that it would take as few as 50,000 infected devices to pull off an impactful attack. In contrast, many current criminal IoT botnets contain millions of bots. Consumers whose devices are unwittingly conscripted into a high-wattage botnet would also be unlikely to notice anything amiss; attackers could intentionally turn on devices to pull power late at night or while people are likely to be out of the house. 


How to refactor God objects in C#

When multiple responsibilities are assigned to a single class, the class violates the single responsibility principle. Again, such classes are difficult to maintain, extend, test, and integrate with other parts of the application. The single responsibility principle states that a class (or subsystem, module, or function) should have one and only one reason for change. If there are two reasons for a class to change, the functionality should be split into two classes with each class handling one responsibility. Some of the benefits of the single responsibility principle include orthogonality, high cohesion, and low coupling. A system is orthogonal if changing a component changes the state of that component only, and doesn’t affect other components of the application. In other words, the term orthogonality means that operations change only one thing without affecting other parts of the code. Coupling is defined as the degree of interdependence that exists between software modules and how closely they are connected to each other. When this coupling is high, the software modules are interdependent, i.e., they cannot function without each other.
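The split described above is easy to see in miniature. The article's context is C#, but the principle is language-agnostic, so here is a minimal Python sketch (all class names are hypothetical): a report class that also persisted and formatted itself would have three reasons to change, so persistence and presentation each get their own class.

```python
class Report:
    """Holds report data only -- its single reason to change is the data model."""
    def __init__(self, title, rows):
        self.title = title
        self.rows = rows


class ReportRepository:
    """Handles persistence only; changing the storage story touches just this class."""
    def save(self, report):
        # Stand-in for a real database/file write.
        return f"{report.title}:{len(report.rows)} rows saved"


class ReportFormatter:
    """Handles presentation only; changing the layout touches just this class."""
    def render(self, report):
        lines = [report.title] + [str(r) for r in report.rows]
        return "\n".join(lines)
```

Each class now has one reason to change, and the three can be tested and maintained independently, which is exactly the low coupling and high cohesion the principle promises.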


Cloud storage costs: How to get cloud storage bills under control

Cloud storage is not just about how much it costs per gigabyte stored. There are also costs associated with the transfer of data in and out of the cloud. In many services there are two costs: one per-GB each time servers in different domains communicate with each other, and another per-GB cost to transfer data over the internet. “For example, in AWS [Amazon Web Services], you are charged if you use a public IP address. Because you don’t buy dedicated bandwidth, there is an additional data transfer charge against each IP address – which can be a problem if you create websites and encourage people to download videos,” says Richard Blanford, CEO of IT service provider Fordway. “Every time a video is played you incur a charge, which will soon add up if several thousand people download your 100MB video.” ... The same issue applies with resilience and service recovery, where you will be charged for data traffic between domains to keep a second disaster recovery (DR) or failover environment in a different region or availability zone. “Moving data between regions and out of the public cloud also incurs a fee. Most companies that use a public cloud service pay this for day-to-day transactions, such as moving data from cloud-based storage to on-premise storage, and costs can quickly spiral as your tenancy grows,” says Blanford.
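Blanford's video example is simple arithmetic, and it is worth doing once: downloads multiplied by file size gives egress volume, and egress volume multiplied by the per-GB rate gives the bill. The rate below is an assumed illustrative figure, not a quoted price from any provider:

```python
def egress_cost(downloads, size_mb, rate_per_gb=0.09):
    """Estimate data-transfer cost: total GB served times the per-GB egress rate.
    The default rate is an assumption for illustration only."""
    gb_served = downloads * size_mb / 1024
    return gb_served * rate_per_gb


# Blanford's scenario: several thousand plays of a 100MB video.
cost = egress_cost(downloads=5000, size_mb=100)
```

The point is less the absolute number than the scaling: the charge grows linearly with audience size, so a video that goes from thousands to millions of downloads multiplies the bill by the same factor.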


How Complexities Disturb and Improve Employee and Customer Experience

Regulations and laws often have good reasons to exist, especially in heavily regulated industries, such as banking, financials and pharma. Of course, we can find exceptions where these become difficult to justify. This rarely occurs when attentive leadership works to demonstrate how these externals either help the company or protect its stakeholders. When organizations can communicate consistently, they build buy-in and engagement. ... Develop a process that translates feedback into solutions. First, identify the balance in outcomes. If there are complexities that seem unnecessary, find out why they were designed or implemented in that way. Make sure that teams are in agreement that the complexity is not solving issues as intended but aggravating others. Take steps toward a solution. If you cannot solve it, raise the error to your direct leadership or team. Identifying these sources of complexity is an incredible value, and an opportunity for the organization to improve. The fewer complexities, the more engaged employees become, the better experience an organization will deliver to its customers, partners and employees.


The Dark Side of Data

The ways that we use data have many inherent risks. There are hidden dangers in algorithmic decision making. Data is imperfect and algorithms often have built-in biases—biases that all too frequently have traumatic impacts on individuals and families as in this example of facial recognition gone wrong. Cathy O’Neil describes the risks of algorithmic dependencies and decisions in depth in her book Weapons of Math Destruction. We have yet to master data governance and data ethics. And now we need to step up to algorithmic governance and data science ethics. The abundance of data and the immense power that we have to process data bring both opportunity and risk. It is a grave error to pursue the opportunities without also making a serious commitment to managing the risks. Yes, data is informative and valuable—sometimes even invigorating and exciting. But we use it badly with too much attention to profits and too little attention to people. Data has a dark side of misuse and abuse. We are failing to step up to the real value opportunities of data—to improve the human condition—and we are failing to mitigate the risks inherent in modern data capabilities.


AI Recruiting Tools Aim to Reduce Bias in the Hiring Process

“One of the unintended consequences would be to continue this historical trend, particularly in tech, where underserved groups such as African Americans are not within a sector that happens to have a compensation that is much greater than others,” says Fay Cobb Payton, a professor of information technology and analytics at North Carolina State University, in Raleigh. “You’re talking about a wealth gap that persists because groups cannot enter [such sectors], be sustained, and play long term.” Payton and her colleagues highlighted several companies—including GapJumpers—that take an “intentional design justice” approach to hiring diverse IT talent in a paper published last year in the journal Online Information Review. According to the paper’s authors, there is a broad spectrum of possible actions that AI hiring tools can perform. Some tools may just provide general suggestions about what kind of candidate to hire, whereas others may recommend specific applicants to human recruiters, and some may even make active screening and selection decisions about candidates. But whatever the AI’s role in the hiring process, there is a need for humans to have the capability to evaluate the system’s decisions and possibly override them.


Data Bytes: Gartner on the IaaS Boom, Plus Cloud Geography, IoT Security

“Enterprises and providers must work together to prioritize and support IoT security requirements,” said Alexandra Rehak, Internet of Things Practice Head at Omdia. “Providers need to make sure IoT security solutions are simple and can be easily understood and integrated. Given how high a priority this is for enterprise end users, providers also need to do more to educate customers as well as providing technology solutions, to help ensure IoT security is not a barrier for adoption.” When it comes to the medium- to long-term focus for IoT industry leaders, 81% agreed that 5G would “transform” the industry. The top two benefits from 5G deployment are expected to be the ability to manage a massive number of IoT devices (67%) and the ability to achieve ultra-low latency (60%), allowing businesses to be even more agile. “COVID-19 is expected to impact IoT in 2020,” said Zach Butler, Director, IoT World. “Despite this, Omdia forecasts potential in some segments, including connected health, as innovators use IoT technologies to tackle some of the pressing crises of the moment. Long-term, there is little doubt that 5G will change the face of IoT, particularly in the automotive and manufacturing sectors.”


Ransomware: These warning signs could mean you are already under attack

Encryption of files by ransomware is the last thing that happens; before that, the crooks will spend weeks, or longer, investigating the network to discover weaknesses. One of the most common routes for ransomware gangs to make their way into corporate networks is via Remote Desktop Protocol (RDP) links left open to the internet. "Look at your environment and understand what your RDP exposure is, and make sure you have two-factor authentication on those links or have them behind a VPN," said Jared Phipps, VP at security company SentinelOne. Coronavirus lockdown means that more staff are working from home, and so more companies have opened up RDP links to make remote access easier. This is giving ransomware gangs an opening, Phipps said, so scanning your internet-facing systems for open RDP ports is a first step. Another warning sign could be unexpected software tools appearing on the network. Attackers may start with control of just one PC on a network – perhaps via a phishing email. With this toe-hold in the network, hackers will explore from there to see what else they can find to attack.
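Phipps's first step, checking your own internet-facing systems for exposed RDP, can start with something as simple as a TCP connection test against port 3389 from outside your network. A minimal single-host sketch using only Python's standard library (a real audit would scan your full external address range with a proper scanner, and only against systems you own):

```python
import socket


def rdp_port_open(host, port=3389, timeout=2.0):
    """Return True if a TCP connection to the given port succeeds,
    i.e. the port is reachable from wherever this runs."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unreachable
        return False


# Example: check a host you own from outside the corporate network.
# rdp_port_open("203.0.113.10")
```

A reachable port 3389 is not proof of compromise, but per the article it is exactly the kind of exposure that should sit behind a VPN or at minimum require two-factor authentication.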


Creating a Progressive Web App with Blazor WebAssembly

The biggest challenges in designing your PWA are based on why you're creating a PWA. There are, at least, three reasons for creating a PWA. One is that you simply want your app to start faster (presumably a local copy of the app will start faster for your user than navigating to your site and downloading the app). Another good reason for creating a PWA is to reduce demands on the server: Because navigation between pages is handled locally, the demand on your server is reduced to just the REST requests that your app may make (especially if new versions of your app are downloaded from a different server, further reducing demand on your application server). A third reason (and probably the one that you're thinking of) is to enable your app to work even when the user doesn't have a network connection. Be aware: That option opens you up to a potential world of hurt. To begin with, you need to realize that, while the user is offline, you won't be able to authenticate your user. You'll have to decide what functionality you'll provide to a "non-authenticated user" -- all you know about "non-authenticated" users is that they have successfully logged onto the device that your app is running on.



Quote for the day:

"Be willing to make decisions. That's the most important quality in a good leader." -- General George S. Patton, Jr.

Daily Tech Digest - August 10, 2020

Computer vision: Why it’s hard to compare AI and human perception

In the seemingly endless quest to reconstruct human perception, the field that has become known as computer vision, deep learning has so far yielded the most favorable results. Convolutional neural networks (CNN), an architecture often used in computer vision deep learning algorithms, are accomplishing tasks that were extremely difficult with traditional software. However, comparing neural networks to human perception remains a challenge. And this is partly because we still have a lot to learn about the human vision system and the human brain in general. The complex workings of deep learning systems also compound the problem. Deep neural networks work in very complicated ways that often confound their own creators. In recent years, a body of research has tried to evaluate the inner workings of neural networks and their robustness in handling real-world situations. ... The researchers note that the human visual system is naturally pre-trained on large amounts of abstract visual reasoning tasks. This makes it unfair to test the deep learning model on a low-data regime, and it is almost impossible to draw solid conclusions about differences in the internal information processing of humans and AI.


How To Close The Distance On Remote Work: The Most Important Leadership Skill

In terms of mindset, your perspective is important. One of my colleagues (an especially responsive leader herself) says her grandmother has a gift for making each grandchild feel valued and unique. Great leadership is like this as well. While no one should play favorites, it’s powerful for each team member to feel they matter and know you appreciate them and their contribution. When you give people responsibility and trust them to do good work, you won’t have to be as involved in the work they’re doing. Your time will be spent coaching, developing and making decisions where your perspective or position are most critical. You should set guardrails—for example spending more than a certain amount of money or which key topics require your input or decision-making—but within those boundaries, set people free. By not being too deeply in the details, you’ll have more time to be accessible where you’re needed most. Another mindset to help you be more responsive is to know your people well. When you have a good sense of what motivates each employee and what their unique needs are, you’re able to tune your messages. You’ll be more responsive when you’re able to meet employees where they are and provide the information or direction they need most.


2035's Biggest AI Threat Is Already Here

Unlike a robot siege that might damage property, the harm caused by these deep fakes was the erosion of trust in people and society itself. The threat of A.I. may seem to be forever stuck in the future — after all, how can A.I. harm us when my Alexa can't even correctly give a weather report? — but Shane Johnson, Director of the Dawes Centre for Future Crimes at UCL, which funded the study, explains that these threats will only continue to grow in sophistication and entanglement with our daily lives. "We live in an ever-changing world which creates new opportunities - good and bad," Johnson warns. "As such, it is imperative that we anticipate future crime threats so that policymakers and other stakeholders with the competency to act can do so before new 'crime harvests' occur." While the authors concede that the judgments made in this study are inherently speculative in nature and influenced by our current political and technical landscape, they argue that the future of these technologies cannot be removed from those environments either. HOW DID THEY DO IT — In order to make these futuristic judgments, the researchers gathered a team of 14 academics in related fields, seven experts from the private sector, and 10 experts from the public sector.


Fintech 2020: 5 trends shaping the future of the industry

One thing consumers prefer most is multiple services across one platform. Many Fintech brands have already rolled out multiple services across one app, and the growing range of robust solutions offered through powerful API integrations will add to this. In the coming days, consumers who need banking services are likely to turn to those financial players who can offer convenient, easy transactions that are entirely safe and secure. Banks alone cannot do much to address these consumer needs, but technology can do a lot to meet digital consumer demand. Blockchain and Big Data are two technologies in full swing, and they are also complementary. According to experts, brands adopting burgeoning blockchain technology will benefit the most: financial services will be able to reduce fraudulent activities and phishing attacks and ensure secure payments. The other areas Fintech needs to turn its attention to are Artificial Intelligence, Machine Learning and Data Analytics, all of which can help financial services address key challenges like cost reduction and scrutinizing risky transactions.


The dark side of Israeli cybersecurity firms

The common denominator of these companies is their definition as cybersecurity firms. "The law doesn't allow companies or individuals to get involved with offensive cyber," according to Dr. Harel Menashri, head of the cyber department at the Holon Institute of Technology, who was a co-founder of the Shin Bet Cyber Warfare Unit. "The Israeli cyber industry has made a name for itself for advanced capabilities ... One of the greatest advantages of the Israeli culture is the ability to develop and move around things very quickly. Even if I didn't serve in the same unit with someone who I'm interested in, I'll probably know someone who did," Menashri added. "Israelis gain their technological knowledge during their military service through units like 8200 and the cyber units of Shin Bet and the Mossad. That knowledge is a weapon, and today, quite a few IDF veterans from intelligence units move abroad and share their knowledge with foreign parties." Menashri gave the example of a group of young Israelis who had graduated from the IDF's elite Unit 8200 and a few months ago decided to go and work for the UAE-based intelligence firm Dark Matter after being tempted by large sums of money.


How to Build an Accessibility-First Design Culture

A great place to begin is your component library. Identify which components are used most often and which ones underpin other functions. For example, make sure buttons, inputs and links have accessible focus and hover states. It's an efficient, high-leverage way of scaling accessibility fixes because once you make one fix, you'll see it propagate throughout the organization wherever that component is used. There are a few key factors to be aware of at this stage. First, create a clear plan for who can make changes and how you're testing components to ensure accessibility features are not unintentionally removed. Second, your work doesn't end after creating accessible components. In the UI, individual components are put together like puzzle pieces, and just because each piece is accessible doesn't mean the entire UI will be. Since the UI involves multiple components talking to each other, you'll need to ensure that the experience is usable and accessible as a whole. The goal is to ensure every existing and new component in a library is accessible by default. This way, when developers pull features into their work, they'll know with certainty it's designed to be accessible. Get it right once, and you get it right everywhere.


Powering the Era of Smart Cities

A priority for cities in the years to come will be reducing air pollution levels. This is already a major concern – nine in ten people breathe polluted air, resulting in seven million deaths every year, according to the World Health Organisation. As city populations and traffic volumes boom, the role of smart technology in tackling pollution will be crucial. While data on emissions and congestion has been available for some time, only recently have we been able to build a full picture of its reach and harm. Fusing data from various sources can reveal new insights to be used to manage energy use and minimise pollution. For example, IoT sensor technology can intelligently detect when there is little or even no pedestrian or road traffic, dimming streetlights autonomously and saving energy. By crunching vehicle rates in real time, as well as pressure, temperature and humidity, air quality levels can be accurately predicted and mapped. This provides the insight to proactively adapt traffic controls and mitigate harm. As always, the smart move is to analyse and adopt best practices from other cities and nations. Singapore, for example, is generally considered to be the global smart city leader, largely due to significant government investments in digital innovation and connected technologies.
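The autonomous streetlight dimming described above reduces to a simple control rule. A minimal sketch; the thresholds and brightness levels are invented for illustration, not taken from any real deployment:

```python
def streetlight_brightness(pedestrians: int, vehicles: int) -> float:
    """Return a brightness level (0.0-1.0) from detected traffic.

    Illustrative thresholds: dim to a safety minimum on an empty street,
    moderate light for light traffic, full brightness when busy.
    """
    if pedestrians == 0 and vehicles == 0:
        return 0.1
    if pedestrians + vehicles < 5:
        return 0.5
    return 1.0

print(streetlight_brightness(0, 0))   # empty street -> 0.1
print(streetlight_brightness(10, 3))  # busy street  -> 1.0
```

A real system would smooth over sensor noise and ramp brightness gradually rather than switching between fixed levels.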


The future of tech in healthcare: wearables?

IoT and wearable devices are ideally placed to transform the management of both preventable and chronic diseases and represent a big opportunity for digital to disrupt the industry. Data on human health can now be collated to a level and scale that was never before possible, while innovations in machine learning and adaptive algorithms provide credible predictors for the risk of diseases. Such data gives us actionable insight, empowering us to make small but significant changes to lifestyle habits so we may work towards living a longer, healthier life. The opportunity, however, does not come without challenges, and two of the biggest obstacles that must be negotiated are budgetary and clinical. On the financial side, the system either lives or dies depending on whether doctors have the additional time and expertise to interpret and implement a treatment plan based on the assessment of vast reams of data. On the clinical side, non-medically graded user-generated data makes it challenging for a doctor to include this within the overall treatment decision-making process. The strength of AI and machine learning, of course, is that it can cope with large amounts of data and find statistical correlations where they exist.


Microsoft unveils Open Service Mesh, vows donation to CNCF

Open Service Mesh builds on SMI, which is expressly not a service mesh implementation, but rather a set of standard API specifications designed within CNCF. If followed, the specs allow service mesh interoperability across multiple types of networks, including other service meshes, and public, private and hybrid clouds. The service mesh layer will be a key component of broadly accessible, real-world multi-cloud container portability as mainstream enterprise cloud-native applications advance, Pullen said. “Service mesh should help that, theoretically, especially if there’s standardization of it, but it’s going to require an interesting rework to make any Docker container compatible with any container cluster,” he said. “It’s more than putting something in Docker, it’s about that ability to route services in a somewhat decoupled way.” Simplicity and ease of use were also points of emphasis in Microsoft’s OSM rollout, which analysts said seemed to target another common complaint about operational complexity among early adopters of Istio. OSM, by contrast, will build in some services that have been complex for service mesh early adopters to set up themselves, such as mutual TLS authentication.


Understanding What Good Agile Looks Like

Agile management began as a work of passion. It was born of a fierce desire felt by disgruntled software developers to set things right. Their Agile Manifesto (2001) not only succeeded in its modest goal of "uncovering better ways of developing software.” It had the unintended consequence of producing a candidate paradigm for management generally in 2020. Thus, Agile management began with exploring more nimble processes for one team, then several teams, then many teams and then the whole organization. It set in train the emergence of firms like Amazon and Google that not only showered benefits on their customers and users but also, for better or worse, developed the capacity to dominate the entire planet. As society now struggles to decide what to do about these new behemoths, it is useful to keep their possible flaws conceptually separate from the principles, processes and practices that enabled them to grow so fast. We need to keep in mind what good Agile looks like—essentially a better way for human beings to create more value for other human beings. In any established organization, a small set of fairly stable principles (also known as mindset or management model) tends to guide decision-making throughout the organization.



Quote for the day:

“Strength and growth come only through continuous effort and struggle.” -- Napoleon Hill

Daily Tech Digest - August 09, 2020

Grassroots Data Security: Leveraging User Knowledge to Set Policy

Today, the IT team owns the entire problem. They write rules to discover and characterize content (What is this file? Do we care about it?). They write more rules to evaluate that content (Is it stored in the right place? Is it marked correctly?). Then they write still more rules to enforce a policy (block, quarantine, encrypt, log). Unsurprisingly, complexity, maintenance overhead, false positives and security lapses are inevitable. It turns out data security policies are already defined. They’re hiding in plain sight. That’s because content creators are also the content experts and they’re demonstrating policy as they go. A sales team, for example, manages hundreds of quotes, contracts and other sensitive documents. The way they mark, store, share and use them defines an implicit data security policy. Every group of similar documents has an implicit policy defined by the expert content creators themselves. The problem, of course, is how to extract that grassroots wisdom. Deep learning gives us two tools to do it: representation learning and anomaly detection. Representation learning is the ability to process large amounts of information about a group of “things” (files in our case) and categorize those things. For data security, advances in natural language processing now give us insights into a document’s meaning that are far richer and more accurate than simple keyword matches.
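The two deep-learning tools map onto a simple pipeline: build a representation of each file, then flag the ones that sit far from the rest of their group. A toy sketch under strong simplifications (bag-of-words counts stand in for learned representations, and cosine distance to a group prototype stands in for anomaly detection; the documents are invented):

```python
from collections import Counter
import math

def vectorize(text: str) -> Counter:
    """Crude bag-of-words representation of a document."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def flag_anomalies(docs, threshold=0.5):
    """Flag documents that look unlike the rest of their group."""
    vecs = [vectorize(d) for d in docs]
    centroid = sum(vecs, Counter())  # naive group "prototype"
    return [d for d, v in zip(docs, vecs) if cosine(v, centroid) < threshold]

quotes = ["price quote for acme contract renewal",
          "quote terms pricing contract acme",
          "birthday party playlist ideas"]
print(flag_anomalies(quotes))  # ['birthday party playlist ideas']
```

The sales team's quotes cluster together; the misfiled document falls below the similarity threshold and is surfaced for review, which is the grassroots-policy idea in miniature.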


IoT governance: how to deal with the compliance and security challenges

According to Ted Wagner, CISO at SAP NS2, the topics that should be included in any IoT governance program are “software and hardware vulnerabilities, and compliance with security requirements — whether they be regulatory or policy based.” He refers to a typical use case of when a software flaw is discovered within an IoT device. In this instance, it is important to determine the severity of the flaw. Could it lead to a security incident? How quickly does it need to be addressed? If there is no way to patch the software, is there another way to protect the device or mitigate the risk? “A good way to deal with IoT governance is to have a board as a governance structure. Proposals are presented to the board, which is normally made up of 6-12 individuals who discuss the merits of any new proposal or change. They may monitor ongoing risks like software vulnerabilities by receiving periodic vulnerability reports that include trends or metrics on vulnerabilities. Some boards have a lot of authority, while others may act as an advisory function to an executive or a decision maker,” Wagner advises.
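Wagner's triage questions (how severe is the flaw, can it be patched, can the risk otherwise be mitigated) amount to a small decision table. A hypothetical sketch; the severity levels and responses are illustrative policy choices, not a standard:

```python
def triage_flaw(severity: str, patch_available: bool, mitigation_available: bool) -> str:
    """Map a discovered IoT software flaw to a governance action.

    Severity strings and response wording are invented for illustration.
    """
    if severity == "critical":
        if patch_available:
            return "patch immediately"
        if mitigation_available:
            return "apply mitigation and isolate device"
        return "take device offline pending fix"
    if severity == "high":
        return "patch in next maintenance window" if patch_available else "apply mitigation"
    # lower-severity flaws feed the periodic vulnerability reports the board reviews
    return "track in periodic vulnerability report"

print(triage_flaw("critical", patch_available=False, mitigation_available=True))
```

In practice such a table would be ratified by the governance board Wagner describes, with the board adjusting thresholds as vulnerability trends change.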


Smart locks opened with nothing more than a MAC address

Young reached out to U-Tec on November 10, 2019, with his findings. The company told Young not to worry in the beginning, claiming that "unauthorized users will not be able to open the door." The cybersecurity researcher then provided them with a screenshot of the Shodan scrape, revealing active customer email addresses leaked in the form of MQTT topic names. Within a day, the U-Tec team made a few changes, including the closure of an open port, adding rules to prevent non-authenticated users from subscribing to services, and "turning off non-authenticated user access." While an improvement, this did not resolve everything.  "The key problem here is that they focused on user authentication but failed to implement user-level access controls," Young commented. "I demonstrated that any free/anonymous account could connect and interact with devices from any other user. All that was necessary was to sniff the MQTT traffic generated by the app to recover a device-specific username and an MD5 digest which acts as a password." After being pushed further, U-Tec spent the next few days implementing user isolation protocols, resolving every issue reported by Tripwire within a week.
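The user-level access control Young found missing can be illustrated with a per-user topic check enforced at the broker: authentication tells you who the client is, but you still have to restrict which topics that identity may touch. The topic layout ("users/<email>/...") is a hypothetical example, not U-Tec's actual scheme:

```python
def may_subscribe(authenticated_user: str, topic: str) -> bool:
    """Allow a client to subscribe only to its own per-user topic tree.

    Without a check like this, any authenticated (even anonymous/free)
    account can subscribe to other users' device topics.
    """
    parts = topic.split("/")
    return len(parts) >= 2 and parts[0] == "users" and parts[1] == authenticated_user

print(may_subscribe("alice@example.com", "users/alice@example.com/lock/state"))    # True
print(may_subscribe("mallory@example.com", "users/alice@example.com/lock/state"))  # False
```

Real MQTT brokers express this as topic ACLs tied to the authenticated client identity; the point is that the authorization decision is separate from, and in addition to, authentication.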


RPA competitors battle for a bigger prize: automation everywhere

Competitive dynamics are heating up. The two emergent leaders, Automation Anywhere Inc. and UiPath Inc., are separating from the pack. Large incumbent software vendors such as Microsoft Corp., IBM Corp. and SAP SE are entering the market and positioning RPA as a feature. Meanwhile, the legacy business process players continue to focus on taking their installed bases on a broader automation journey. However, all three of these constituents are on a collision course in our view where a deeper automation objective is the “north star.” First, we have expanded our thinking on the RPA total available market and we are extending this toward a broader automation agenda more consistent with buyer goals. In other words, the TAM is much larger than we initially thought and we’ll explain why. Second, we no longer see this as a winner-take-all or winner-take-most market. In this segment we’ll look deeper into the leaders and share some new data. In particular, although it appeared in our previous analysis that UiPath was running the table on the market, we see a more textured competitive dynamic setting up and the data suggests that other players, including Automation Anywhere and some of the larger incumbents, will challenge UiPath for leadership in this market. 


Unlocking Industry 4.0: Understanding IoT In The Age Of 5G

The challenge is not just about bandwidth. Different IoT systems will have different network requirements. Some devices will demand absolute reliability where low latency will be critical, while other use cases will see networks having to cope with a much higher density of connected devices than we’ve previously seen. For example, within a production plant, one day simple sensors might collect and store data and communicate to a gateway device that contains application logic. In other scenarios, IoT sensor data might need to be collected in real-time from sensors, RFID tags, tracking devices, even mobile phones across a wider area via 5G protocols. Bottom line: Future 5G networks could help enable a number of IoT and IIoT use cases and benefits in the manufacturing industry. Looking ahead, don’t be surprised if you see these five use cases transform with strong, reliable connectivity from multi-spectrum 5G networks currently being built and the introduction of compatible devices. With IoT/IIoT, manufacturers could connect production equipment and other machines, tools, and assets in factories and warehouses, providing managers and engineers with more visibility into production operations and any issues that might arise.


The case for microservices is upon us

For many businesses, monolithic architecture has been and will continue to be sufficient. However, with the rise of mobile browsing and the growing ubiquity of omnichannel service delivery, many businesses are finding their code libraries become more convoluted and difficult to maintain with each passing year. As businesses scale and expand their business capabilities, they often run into the issue that the code behind their various components is too tightly bound in a monolithic structure. This makes it difficult to deploy updates and fixes because change cycles are tied together, which means they need to update the whole system at once instead of simply updating the single function that needs improvement. Microservices architecture is one of the ways companies are overhauling their tech stacks to keep up with modern DevOps best practices and future-proof their operations, making them more flexible and agile. Given the rapid pace of change where technologies and consumer expectations are concerned, businesses that do not build capacity for agility and scalability into their business model are placing themselves at a disadvantage – particularly at a time when businesses are being forced to pivot frequently in response to widespread market instability.


Game of Microservices

A microservice works best when it has its own private database (database per service). This ensures loose coupling with other services and maintains data integrity, i.e. each microservice controls and updates its own data. ... A SAGA is a sequence of local transactions. In a SAGA, a set of services works in tandem to execute a piece of functionality: each local transaction updates the data in one service and sends an event or a message that triggers the next transaction in other services. The microservice architecture (usually) mandates the database-per-service paradigm. The monolithic approach, though it has its own operational issues, deals with transactions very well: it offers an inherent mechanism for ACID transactions and for rollback in cases of failure. In contrast, because the microservices approach distributes the data and the data sources by service, some transactions may span multiple services. Achieving transactional guarantees in such cases is of high importance, or we risk losing data consistency and leaving the application in an unexpected state. The SAGA approach is a mechanism to ensure data consistency across services.
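The SAGA flow (local transactions in sequence, with compensating actions run in reverse when a step fails) can be sketched as a toy orchestrator; the order/payment/inventory step names are invented for illustration, and a real implementation would use durable messaging rather than in-process calls:

```python
def run_saga(steps):
    """Execute local transactions in order; on failure, compensate in reverse.

    Each step is a (action, compensation) pair of callables. Returns True if
    every local transaction committed, False if the saga had to roll back.
    """
    done = []
    try:
        for action, compensate in steps:
            action()
            done.append(compensate)
    except Exception:
        for compensate in reversed(done):
            compensate()  # undo already-committed local transactions
        return False
    return True

def fail(msg):
    raise RuntimeError(msg)

log = []
order_saga = [
    (lambda: log.append("order created"),   lambda: log.append("order cancelled")),
    (lambda: log.append("payment charged"), lambda: log.append("payment refunded")),
    (lambda: fail("no stock"),              lambda: None),  # inventory step fails
]
print(run_saga(order_saga))  # False
print(log)  # ['order created', 'payment charged', 'payment refunded', 'order cancelled']
```

Note that compensations are new local transactions (a refund), not database rollbacks, which is exactly why each service can keep its private database.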


Metadata Repository Basics: From Database to Data Architecture

While knowledge graphs have shown potential for the metadata repository to find relationship patterns among large amounts of information, some businesses want more from a metadata repository. Streaming data ingested into databases from social media and IoT sensors also needs to be described. According to a New Stack survey of 800 professional developers, real-time data use has seen a significant increase. What does this mean for the metadata repository? Enterprises want metadata to show the who, what, why, when, and how of their data. The centralized metadata repository database answers these questions but remains too slow and cumbersome to handle large amounts of light-speed metadata. Knowledge graphs have the advantage of dealing with large amounts of data quickly. However, knowledge graphs display only specific types of patterns in their metadata repository. Companies need another metadata repository tool. Here comes the data catalog, a metadata repository informing consumers what data lives in data systems and the context of this data.
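At its simplest, a data catalog is a queryable map from dataset names to descriptive metadata that answers the who/what/where questions. A minimal sketch; the dataset names and metadata fields are invented:

```python
# A toy catalog: each entry describes a dataset's ownership, origin and content.
catalog = {
    "sales.quotes": {
        "owner": "sales-ops",
        "source": "crm_db",
        "description": "Customer quotes and contract drafts",
        "contains_pii": True,
    },
    "iot.sensor_readings": {
        "owner": "platform",
        "source": "mqtt_stream",
        "description": "Real-time factory sensor telemetry",
        "contains_pii": False,
    },
}

def lookup(dataset: str) -> str:
    """Answer the who/what/where questions for a dataset, if catalogued."""
    entry = catalog.get(dataset)
    if entry is None:
        return f"{dataset}: not catalogued"
    return (f"{dataset}: {entry['description']} "
            f"(owner: {entry['owner']}, source: {entry['source']})")

print(lookup("sales.quotes"))
```

Production catalogs add lineage, freshness and access policies on top of this core lookup, and populate entries automatically by crawling the data systems.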


Why edge computing is forcing us to rethink software architectures

The perspective on cloud hardware has since shifted. The current generation of cloud focuses on expensive, high-performance hardware rather than cheap commoditised systems. For one, cloud hardware and data centre architectures are morphing into something resembling an HPC system or supercomputer. Networking has followed the same route, with technologies like InfiniBand EDR and photonics paving the way for ever greater bandwidth and tighter latencies between servers, while the use of backbones and virtual networks has led to improvements in the bandwidth between geographically distant cloud data centres. The other shift currently underway is in the layout of these platforms themselves. The cloud is morphing and merging into edge computing environments where data centres are deployed with significantly greater de-centralisation and distribution. Traditionally, an entire continent might be served by a handful of cloud data centres. Edge computing moves these computing resources much closer to the end-user — virtually to every city or major town. The edge data centres of every major cloud provider are now integrated into their backbone providing a sophisticated, geographically dispersed grid.


The Importance of Reliability Engineering

SRE isn’t just a set of practices and policies—it’s a mentality on how to develop software in a culture free of blame. By embracing this new mindset, your team’s morale and camaraderie will improve, allowing everyone to work at their full potential in a psychologically safe environment. SRE teaches us that failure is inevitable. No matter how many precautions you take, incidents happen. While giving you the tools to respond effectively to these incidents, SRE also challenges us to celebrate these failures. When something new goes wrong, it means there’s a chance to learn about your systems. This attitude creates an environment of continuous learning.  When analyzing these inevitable incidents, it’s important to maintain an attitude of blamelessness. Instead of wasting time pointing fingers and finding fault, work together to find the systematic issues behind the incident. By avoiding a culture of blame and shame, engineers are less afraid to proactively raise issues. Team members will trust each other more, assuming good faith in their teammates’ choices. This spirit of blameless collaboration will transform the most challenging incidents into opportunities for growing stronger together.



Quote for the day:

"One must be convinced to convince, to have enthusiasm to stimulate the others." -- Stefan Zweig

Daily Tech Digest - August 08, 2020

Safeguarding the use of complex algorithms and machine learning

The immediate fallouts of algorithmic risks can include inappropriate and potential illegal decisions. And they can affect a range of functions, such as finance, sales and marketing, operations, risk management, information technology, and human resources. Algorithms operate at faster speeds in fully automated environments, and they become increasingly volatile as algorithms interact with other algorithms or social media platforms. Therefore, algorithmic risks can quickly get out of hand. Algorithmic risks can also carry broader and long-term implications across a range of risks, including reputation, financial, operational, regulatory, technology, and strategic risks. Given the potential for such long-term negative implications, it’s imperative that algorithmic risks be appropriately and effectively managed. ... A good starting point for implementing an algorithmic risk management framework is to ask important questions about the preparedness of your organization to manage algorithmic risks. For example: Does your organization have a good handle on where algorithms are deployed?; Have you evaluated the potential impact should those algorithms function improperly?; Does senior management within your organization understand the need to manage algorithmic risks?


How Decision Transformation is Essential to Digital Transformation

The human is better at telling them that the customers aren't happy. And the fact that the customer is unhappy is a crucial determinant in how the decision should be made. So instead of ignoring it in the automation, and throwing up an answer, and then having the person go, "Well that was a stupid answer because this customer is unhappy." Go ahead and ask the person, is the customer unhappy, and if they say yes or no, then use that as part of the decision making. So we find that there's often a role for humans in decision making, but it's often not this supervisory, "You make a suggestion, I'll override it if I feel like it kind of thing." And so we find you have to really understand the structure of your decision making before you can make those judgments. So we encourage people, when we're working with them—Look, let's understand the decision making first and let's understand all of it, automated pieces and the manual pieces. And once we understand all of it, then we can draw a suitable automation boundary to figure out which pieces to digitize, which technologies to use, and make it an integrated whole.
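The pattern described here, asking the human one targeted question and folding the answer into the automated decision rather than having the human override the output, can be sketched as follows; the risk thresholds and offer names are made up for illustration:

```python
def retention_offer(churn_risk: float, customer_unhappy: bool) -> str:
    """Combine an automated risk score with a human-supplied signal.

    The human answers one question ("is the customer unhappy?") and that
    answer is an input to the decision, not a veto over its output.
    """
    if customer_unhappy:
        risk = min(1.0, churn_risk + 0.3)  # unhappiness raises effective risk
    else:
        risk = churn_risk
    if risk > 0.7:
        return "escalate to retention specialist"
    if risk > 0.4:
        return "offer discount"
    return "no action"

print(retention_offer(0.5, customer_unhappy=True))   # escalate to retention specialist
print(retention_offer(0.5, customer_unhappy=False))  # offer discount
```

The automation boundary is explicit: the model supplies `churn_risk`, the human supplies `customer_unhappy`, and the decision logic integrates both.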


3 Daunting Ways Artificial Intelligence Will Transform The World Of Work

Even in seemingly non-tech companies (if there is such a thing in the future), the employee experience will change dramatically. For one thing, robots and cobots will have an increasing presence in many workplaces, particularly in manufacturing and warehousing environments. But even in office environments, workers will have to get used to AI tools as “co-workers.” From how people are recruited, to how they learn and develop in the job, to their everyday working activities, AI technology and smart machines will play an increasingly prominent role in the average person's working life. Just as we've all got used to tools like email, we'll also get used to routinely using tools that monitor workflows and processes and make intelligent suggestions about how things could be done more efficiently. Tools will emerge to carry out more and more repetitive admin tasks, such as arranging meetings and managing a diary. And, very likely, new tools will monitor how employees are working and flag up when someone is having trouble with a task or not following procedures correctly. On top of this, workforces will become decentralized  – which means the workers of the future can choose to live anywhere, rather than going where the work is.


Facebook open-sources one of Instagram's security tools

While most static analyzers look for a wide range of bugs, Pysa was specifically developed to look for security-related issues. More particularly, Pysa tracks "flows of data through a program." How data flows through a program's code is very important. Most security exploits today take advantage of unfiltered or uncontrolled data flows. For example, a remote code execution (RCE), one of today's worst types of bugs, when stripped down, is basically a user input that reaches unwanted portions of a codebase. Under the hood, Pysa aims to bring some insight into how data travels across codebases, and especially large codebases made up of hundreds of thousands or millions of lines of code. This concept isn't new and is something that Facebook has already perfected with Zoncolan, a static analyzer that Facebook released in August 2019 for Hack -- the PHP-like language variation that Facebook uses for the main Facebook app's codebase. Both Pysa and Zoncolan look for "sources" (where data enters a codebase) and "sinks" (where data ends up). Both tools track how data moves across a codebase, and find dangerous "sinks," such as functions that can execute code or retrieve sensitive user data.
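The source/sink idea can be illustrated with a toy flow tracker. Real tools like Pysa and Zoncolan analyze typed ASTs and track flows interprocedurally; this sketch only follows direct assignments, and the source/sink/sanitizer names are just examples:

```python
SOURCES = {"request.get_param"}   # where untrusted data enters the codebase
SINKS = {"os.system", "eval"}     # where it must not arrive unfiltered
SANITIZERS = {"shlex.quote"}      # calls that neutralize tainted data

def find_taint_flows(assignments, calls):
    """Toy source-to-sink tracking.

    `assignments` is a sequence of (variable, producer) pairs in program
    order; `calls` is a sequence of (function, argument_variable) pairs.
    Returns the calls where tainted data reaches a dangerous sink.
    """
    tainted = set()
    for var, producer in assignments:
        if producer in SOURCES:
            tainted.add(var)          # var now carries untrusted data
        elif producer in SANITIZERS:
            tainted.discard(var)      # sanitized; no longer tainted
    return [(func, var) for func, var in calls
            if func in SINKS and var in tainted]

flows = find_taint_flows(
    assignments=[("cmd", "request.get_param")],
    calls=[("os.system", "cmd")],
)
print(flows)  # [('os.system', 'cmd')] -- untrusted input reaches a dangerous sink
```

The stripped-down RCE in the article is exactly this shape: user input (`source`) flowing unchecked into code execution (`sink`).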


Google’s New TF-Coder Tool Claims To Achieve Superhuman Performance

TF-Coder uses two ML models in order to predict the needed operations from features of the input/output tensors and a natural language description of the task. These predictions are then combined within a general framework to modify the weights to customise the search process for the given task.  The researchers introduced three key ideas in the synthesis algorithm. Firstly, they introduced per-operation weights to the prior algorithm, allowing TF-Coder to enumerate over TensorFlow expressions in order of increasing complexity. Secondly, they introduced a novel, flexible, and efficient type- and value-based filtering system that handles arbitrary constraints imposed by the TensorFlow library, such as “the two tensor arguments must have broadcastable shapes.” Finally, they developed a framework to combine predictions from multiple independent machine learning models that choose operations to prioritise during the search, conditioned on features of the input and output tensors and a natural language description of the task. The researchers evaluated TF-Coder on 70 real-world tensor transformation tasks from StackOverflow and from an industrial setting.
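The first idea, enumerating expressions in order of increasing total operation weight, is essentially a uniform-cost search over compositions of operations. A generic sketch on integers rather than tensors; the operations and weights here are invented stand-ins, not TF-Coder's:

```python
import heapq

def weighted_search(inputs, target, ops):
    """Enumerate expressions in order of increasing total operation weight.

    `inputs` maps variable names to values; `ops` maps an operation name to
    (weight, function). Returns the cheapest (expression, cost) reaching
    `target`, or None. Weights play the role of TF-Coder's per-operation costs.
    """
    # frontier entries: (cost_so_far, tiebreak_counter, value, expression)
    frontier = [(0, i, v, name) for i, (name, v) in enumerate(inputs.items())]
    heapq.heapify(frontier)
    seen = set()
    counter = len(frontier)
    while frontier:
        cost, _, value, expr = heapq.heappop(frontier)
        if value == target:
            return expr, cost
        if value in seen or cost > 10:   # prune revisits; cap search depth
            continue
        seen.add(value)
        for name, (weight, fn) in ops.items():
            counter += 1
            heapq.heappush(frontier, (cost + weight, counter, fn(value), f"{name}({expr})"))
    return None

ops = {"double": (1, lambda x: x * 2), "inc": (2, lambda x: x + 1)}
print(weighted_search({"x": 3}, 14, ops))  # ('double(inc(double(x)))', 4)
```

Lowering an operation's weight, as TF-Coder's models do for operations predicted from the input/output tensors, makes expressions using it surface earlier in the enumeration.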


Microservice Architecture in ASP.NET Core with API Gateway

A traditional approach would be to build a single solution in Visual Studio and then separate the concerns via layers. Thus you would probably have projects like eCommerce.Core, eCommerce.DataAccess and so on. Now these separations exist only at the level of code organization and are useful only while developing. When you are done with the application, you will have to publish it to a single server, where you can no longer see the separation in the production environment, right? Now, this is still a cool way to build applications. But let’s take a practical scenario. Our eCommerce API has, let’s say, endpoints for customer management and product management, pretty common, yeah? Now down the road, there is a small fix / enhancement in the code related to the Customer endpoint. If you had built using the Monolith Architecture, you would have to re-deploy the entire application again and go through several tests to guarantee that the new fix / enhancement did not break anything else. A DevOps Engineer would truly understand this pain. But if you had followed a Microservice Architecture, you would have made separate components for Customers, Products and so on.


Granularity Decision of Microservice Splitting in View of Maintainability ...

In practical service application, challenges come from both the service and the technique. This section draws a detailed summary of the features of the four architectures and analyzes their key distinctions (Table 1). In terms of hierarchy, monolithic and vertical architectures centralize the functional modules of each hierarchy with a high degree of coupling; SOA decouples multiple functional modules across vertical and horizontal hierarchies of three or more tiers, but public modules can only be shared on horizontal hierarchies, leading to incomplete decoupling; the fully self-service flexibility achieved by simultaneous decoupling on vertical and horizontal hierarchies is the main characteristic of microservice architecture. However, when putting large projects into practice, development teams cannot comply with all these features: they must consider the integration of irreplaceable systems and pursue the flexibility of full decoupling within an acceptable rate of change. The core role of microservice architecture is to cope with the growing service capability within the system and the increasingly complex interaction demands between systems.



Global Cybercrime Surging During Pandemic

The stress and uncertainty caused by the COVID-19 crisis is creating the ideal environment for cybercriminals looking to cash in or create chaos. "Given the impact and scale of COVID-19, cyberattacks related to organizations involved in COVID-19 research or those firms providing relief services have continued to evolve, morph and expand," says Stanley Mierzwa, director of the Center for Cybersecurity at Kean University in Union, New Jersey. "Threat actors will continue to look for areas of vulnerability, and this could potentially reside in 'local' or 'satellite' offices of larger global for-profit, non-profit and non-governmental organizations that may not be utilizing centrally managed or administered systems," Mierzwa says. Craig Jones, who leads the global cybercrime program for Interpol, said in a recent interview with Information Security Media Group: "Certainly in relation to the COVID-19 pandemic, we're seeing a unique combination of events that have led to a whole range of specific criminal opportunities." Criminals haven't shied away from attempting to seize those opportunities, as demonstrated by their rush to rebrand attacks and even "fake news" campaigns to give them a COVID-19 theme, as well as unleash scams involving personal protective equipment, he told ISMG.


Fixing the Biggest IoT Issue — Data Security

By removing the latency and bandwidth scaling issues of cloud-based intelligence, it paves the way for a far more natural human–machine interaction. In the smart home, for example, the AIoT brings a whole new dimension to home control. By coupling voice with human sensing technology, such as presence detection and biometrics, we can build a multi-modal interaction that delivers an energy efficient and seamless, personalised experience. The TV will know when you’re in the room and ‘wake’ to a standby mode, it will know who you are and on hearing the wake word, greet you with familiarity and deliver your preferred settings. This kind of interaction also has clear applications across smart cities. Multi-modal sensing opens the path for significant steps forward in safety, security and energy efficiency. Let’s take the humble streetlight: the inclusion of human presence detection would enable it to light up only when a pedestrian or cyclist is in the vicinity. Add in voice control and the lamppost can detect a cry for help — or even the sound of glass breaking, triggering a call to the emergency services for assistance. In offices and public buildings, we won’t need to push buttons on elevators or hunt in our bags for a lift pass, instead our biometrics will form our signature for access, enabling a secure and convenient experience.
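The multi-modal lamppost described above boils down to fusing a presence signal with the label from an acoustic event classifier. A minimal sketch; the event names and actions are illustrative labels, not from any real product:

```python
from typing import List, Optional

# Acoustic events (hypothetical classifier labels) that warrant an emergency call.
ALERT_SOUNDS = {"cry_for_help", "glass_break"}

def lamppost_action(presence: bool, sound_event: Optional[str]) -> List[str]:
    """Fuse a presence detector and an acoustic classifier into lamppost actions.

    Presence drives the energy-saving behaviour; alert sounds trigger an
    emergency-services call regardless of presence.
    """
    actions = ["light_on"] if presence else ["light_dim"]
    if sound_event in ALERT_SOUNDS:
        actions.append("call_emergency_services")
    return actions

print(lamppost_action(True, None))             # pedestrian nearby, all quiet
print(lamppost_action(False, "glass_break"))   # empty street, alarming sound
```

Running this fusion on the device itself, rather than in the cloud, is what makes the low-latency response the passage describes possible.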


Exploring the Forgotten Roots of 'Cyber'

"What does 'cyber' even mean? And where does it come from?" writes Thomas Rid in "Rise of the Machines," his book-length quest to unravel cyber's origin story. Everyone from military officers and spies, to bankers, hackers and scholars "all slapped the prefix 'cyber' in front of something else ... to make it sound more techy, more edgy, more timely, more compelling - and sometimes more ironic," writes Rid, who's a professor of political science at Johns Hopkins University. Cyber has cachet. Cyber always seems to point to the future. But as Rid writes in his book, "the future of machines has a past," and cyber has long stood not just for a utopian merging of humans and machines, but for a potential dystopia as well. On the good side exists the potential offered by cyborg-like technologies that might one day, for example, enable humans with spinal injuries to walk again. Such technology may even facilitate the human colonization of Mars. For a view of the flip side, however, take The Matrix's rendering of a post-apocalyptic hellhole in which humans have been made to unthinkingly serve machines.



Quote for the day:

"Many people think great entrepreneurs take risks. Great entrepreneurs mitigate risks." -- James Altucher

Daily Tech Digest - August 07, 2020

Intel investigating breach after 20GB of internal documents leak online

US chipmaker Intel is investigating a security breach after 20 GB of internal documents, some marked "confidential" or "restricted secret," were uploaded earlier today to the file-sharing site MEGA. The data was published by Till Kottmann, a Swiss software engineer, who said he received the files from an anonymous hacker who claimed to have breached Intel earlier this year. Kottmann received the Intel leaks because he manages a very popular Telegram channel where he regularly publishes data that accidentally leaked online from major tech companies through misconfigured Git repositories, cloud servers, and online web portals. The Swiss engineer said today's leak represents the first part of a multi-part series of Intel-related leaks. ZDNet reviewed the content of today's files with security researchers who have analyzed Intel CPUs in past work; they deemed the leak authentic but didn't want to be named in this article due to ethical concerns about reviewing confidential data, and because of their ongoing relations with Intel. Per our analysis, the leaked files contained Intel intellectual property relating to the internal design of various chipsets.


Data Prep for Machine Learning: Normalization

Preparing data for use in a machine learning (ML) system is time consuming, tedious, and error prone. A reasonable rule of thumb is that data preparation requires at least 80 percent of the total time needed to create an ML system. There are three main phases of data preparation: cleaning; normalizing and encoding; and splitting. Each of the three phases has several steps. A good way to understand data normalization and see where this article is headed is to take a look at the screenshot of a demo program. The demo uses a small text file named people_clean.txt where each line represents one person. There are five fields/columns: sex, age, region, income, and political leaning. The "clean" in the file name indicates that the data has been standardized by removing missing values, and editing bad data so that all lines have the same format, but numeric values have not yet been normalized. The ultimate goal of a hypothetical ML system is to use the demo data to create a neural network model that predicts political leaning from sex, age, region, and income. The demo analyzes the age and income predictor fields, then normalizes those two fields using a technique called min-max normalization. The results are saved as a new file named people_normalized.
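Min-max normalization, the technique the demo applies to the age and income columns, rescales each value to [0, 1] via (x - min) / (max - min). A minimal sketch of the idea (this is an illustration, not the demo program's actual code):

```python
def min_max_normalize(values):
    """Rescale a numeric column to [0, 1] via (x - min) / (max - min)."""
    lo, hi = min(values), max(values)
    if hi == lo:                        # constant column: avoid divide-by-zero
        return [0.0 for _ in values]
    return [(x - lo) / (hi - lo) for x in values]

ages = [25, 36, 40, 23, 50]
normalized = min_max_normalize(ages)    # 23 maps to 0.0, 50 maps to 1.0
```

After normalization, both age and income lie on the same [0, 1] scale, so neither field dominates the neural network's weight updates simply because its raw magnitudes are larger.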


Microsoft Teams Patch Bypass Allows RCE

While Microsoft tried to cut off this vector as a conduit for remote code execution by restricting the ability to update Teams via a URL, it was not a complete fix, the researcher explained. “The updater allows local connections via a share or local folder for product updates,” Jayapaul said. “Initially, when I observed this finding, I figured it could still be used as a technique for lateral movement, however, I found the limitations added could be easily bypassed by pointing to an…SMB share.” Server Message Block (SMB) protocol is a network file sharing protocol. To exploit this, an attacker would need to drop a malicious file into an open shared folder – something that typically involves already having network access. However, to reduce this gating factor, an attacker can create a remote rather than local share. “This would allow them to download the remote payload and execute rather than trying to get the payload to a local share as an intermediary step,” Jayapaul said. Trustwave has published a proof-of-concept attack that uses Microsoft Teams Updater to download a payload – using known, common software called Samba to carry out remote downloading.


Federated learning improves how AI data is managed, thwarts data leakage

Researchers believe a shift in the way data is managed could allow more information to reach learning algorithms outside of a single institution, which could benefit the entire system. Penn Medicine researchers propose using a technique called federated learning that would allow users to train an algorithm across multiple decentralized data sources without having to actually exchange the data sets. Federated learning works by training an algorithm across many decentralized edge devices, as opposed to running an analysis on data uploaded to one server. "The more data the computational model sees, the better it learns the problem, and the better it can address the question that it was designed to answer," said Spyridon Bakas, an instructor in the Perelman School of Medicine at the University of Pennsylvania, in a press release. Bakas is lead author of a study on the use of federated learning in medicine that was published in the journal Scientific Reports. "Traditionally, machine learning has used data from a single institution, and then it became apparent that those models do not perform or generalize well on data from other institutions," Bakas said.
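The core loop can be sketched with a toy federated-averaging scheme: each "institution" runs gradient descent on its own private data, and only the model weights — never the records — are shared and averaged. This is a simplified illustration of the approach, not the study's implementation:

```python
def local_train(w, data, lr=0.01):
    """One epoch of SGD on a site's private data for a 1-D linear
    model y = w * x with squared-error loss."""
    for x, y in data:
        grad = 2 * (w * x - y) * x      # d/dw of (w*x - y)^2
        w -= lr * grad
    return w

def federated_average(global_w, sites, rounds=20):
    """FedAvg sketch: each site trains locally; only weights are shared."""
    for _ in range(rounds):
        local_ws = [local_train(global_w, data) for data in sites]
        global_w = sum(local_ws) / len(local_ws)   # average the models
    return global_w

# Two "institutions", each holding private samples of y = 3x; the raw
# records never leave their site, yet the shared model recovers w near 3.
site_a = [(1.0, 3.0), (2.0, 6.0)]
site_b = [(3.0, 9.0), (4.0, 12.0)]
w = federated_average(0.0, [site_a, site_b])
```

Each round, the server broadcasts the current weights, collects locally trained copies, and averages them — so the combined model "sees" both sites' data without either data set ever being exchanged.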


10 Tools You Should Know As A Cybersecurity Engineer

Wireshark is the world’s best network analyzer tool. It is an open-source software that enables you to inspect real-time data on a live network. Wireshark can dissect packets of data into frames and segments, giving you detailed information about the bits and bytes in a packet. Wireshark supports all major network protocols and media types. Wireshark can also be used as a packet-sniffing tool if you are on a public network, where it can see traffic across the entire network connected to a router. ... Netcat is a simple but powerful tool that can view and record data on TCP or UDP network connections. Netcat functions as a back-end listener that allows for port scanning and port listening. You can also transfer files through Netcat or use it as a backdoor to your victim machine. This makes it a popular post-exploitation tool for establishing connections after successful attacks. Netcat is also extensible, given its capability to add scripting for larger or repetitive tasks. Despite its popularity, Netcat was not actively maintained by its community. The Nmap team built an updated version of Netcat called Ncat, with features including support for SSL, IPv6, SOCKS, and HTTP proxies.


Hey software developers, you’re approaching machine learning the wrong way

Unfortunately, lots of folks who set out to learn Machine Learning today have the same experience I had when I was first introduced to Java. They’re given all the low-level details up front — layer architecture, back-propagation, dropout, etc — and come to think ML is really complicated and that maybe they should take a linear algebra class first, and give up. That’s a shame, because in the very near future, most software developers effectively using Machine Learning aren’t going to have to think or know about any of that low-level stuff. Just as we (usually) don’t write assembly or implement our own TCP stacks or encryption libraries, we’ll come to use ML as a tool and leave the implementation details to a small set of experts. At that point — after Machine Learning is “democratized” — developers will need to understand not implementation details but instead best practices in deploying these smart algorithms in the world. ... What makes Machine Learning algorithms distinct from standard software is that they’re probabilistic. Even a highly accurate model will be wrong some of the time, which means it’s not the right solution for lots of problems, especially on its own. Take ML-powered speech-to-text algorithms: it might be okay if occasionally, when you ask Alexa to “Turn off the music,” she instead sets your alarm for 4 AM.


Garmin Reportedly Paid a Ransom

WastedLocker, a ransomware strain that reportedly shut down Garmin's operations for several days in July, is designed to avoid security tools within infected devices, according to a technical analysis from Sophos. In June and July, several research firms published reports on WastedLocker, noting that the ransomware appears connected to the Evil Corp cybercrime group, originally known for its use of the Dridex banking Trojan. "Because WastedLocker has no known security vulnerabilities in how it performs its encryption, it's unlikely that Garmin obtained a working decryption key that fast in any other way but by paying the ransom," Chris Clements, vice president of solutions architecture for Cerberus Sentinel, tells ISMG. Fausto Oliveira, principal security architect at the security firm Acceptto, adds: "What I believe happened is that Garmin was unable to recover their services in a timely manner. Four days of disruption is too long if they are using any reliable type of backup and restore mechanisms. That might have been because their disaster recovery backup strategy failed or the invasion was to the extent that backup sources were compromised as well."


Splicing a Pause Button into Cloud Machines

Splice Machine was born in the days of Hadoop, and uses some of the same underlying data processing engines that were distributed in that platform. But Splice Machine has surpassed the capabilities of that earlier platform by ensuring tight integration with those engines in support of its customers’ enterprise AI initiatives, not to mention elastic scaling via Kubernetes. The way that Splice Machine engineered HBase (for storage) and Spark (for analytics), and its enablement of ACID capabilities for SQL transactions, are core differentiating factors that weigh in Splice Machine’s favor for being a platform on which to build real-time AI applications, according to Zweben. “Doing table scans as the basis of an analytical workload is abysmally slow in HBase, and so, in Splice Machine, we engineered at a very low level the access to the HBase storage with a wrapper of transactionality around it, so you’re only seeing what’s been committed in the database based on ACID semantics,” Zweben explained. “That goes under the cover at a very well-engineered level, looking at the HBase storage and grabbing that into Spark dataframes,” he continued. “We’ve engineered tightly integrated connectivity for performance. ...”


How Synthetic Data Accelerates Coronavirus Research

To access data at the speed required while also respecting the privacy and governance needs of patient data, Washington University in St. Louis, Jefferson Health in Philadelphia, and other healthcare organizations have opted for an alternative, using something called synthetic data. Gartner defines synthetic data as data that is "generated by applying a sampling technique to real-world data or by creating simulation scenarios where models and processes interact to create completely new data not directly taken from the real world." Here's how Payne describes it: "We can take a set of data from real world patients but then produce a synthetic derivative that is statistically identical to those patients' data. You can drill down to the individual row level and it will look like the data extracted from the EHR (electronic health record), but there's no mutual information that connects that data to the source data from which it is derived." Why is that so important? "From the legal and regulatory and technical standpoint, this is no longer potentially identifiable human subjects' data, so now our investigators can literally watch a training video and get access to the system," Payne said. "They can sign a data use agreement and immediately start iterating through their analysis."
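The sampling idea in Gartner's definition can be illustrated with a deliberately naive sketch: fit a distribution to each numeric column of the real data, then draw brand-new rows from those fitted distributions. Real synthetic-data generators also preserve cross-column correlations and handle categorical fields; this toy version keeps only the per-column statistics, and the patient values are made up:

```python
import random
import statistics

def synthesize(real_rows, n):
    """Fit a normal distribution to each numeric column independently,
    then sample n new rows that are statistically similar but carry no
    link back to any source record."""
    cols = list(zip(*real_rows))
    params = [(statistics.mean(c), statistics.stdev(c)) for c in cols]
    return [
        tuple(random.gauss(mu, sd) for mu, sd in params)
        for _ in range(n)
    ]

random.seed(0)  # deterministic for the example
patients = [(54, 120.0), (61, 135.5), (47, 118.2), (58, 128.9)]  # age, bp
fake = synthesize(patients, 1000)
```

Because each synthetic row is drawn fresh from the fitted distributions rather than copied or perturbed from a real record, no row can be traced back to an individual patient — the property Payne highlights above.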


Realtime APIs: Mike Amundsen on Designing for Speed and Observability

For systems to perform as required, data read and write patterns will frequently have to be reengineered. Amundsen suggested judicious use of caching results, which can remove the need to constantly query upstream services. Data may also need to be “staged” appropriately throughout the entire end-to-end request handling process: for example, caching results and data in localized points of presence (PoPs) via content delivery networks (CDNs), caching in an API gateway, and replicating data stores across availability zones (local data centers) and globally. For some high-throughput use cases, writes may have to be streamed to meet demand — for example, writing data locally or via a high-throughput distributed logging system like Apache Kafka, then persisting it to an external data store at a later point in time. Engineers may have to “rethink the network” (respecting the eight fallacies of distributed computing) and design their cloud infrastructure to follow best practices relevant to their cloud vendor and application architecture. Decreasing request and response size may also be required to meet demands. This may be engineered in tandem with the ability to increase the message volume.
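The caching pattern Amundsen recommends — answering repeat reads locally instead of re-querying an upstream service — can be sketched as a small read-through cache with a time-to-live. The class and field names here are illustrative, not from any particular gateway product:

```python
import time

class TTLCache:
    """Minimal read-through cache with per-entry expiry."""

    def __init__(self, ttl_seconds, fetch):
        self.ttl = ttl_seconds
        self.fetch = fetch       # the upstream call, e.g. an HTTP GET
        self._store = {}         # key -> (expires_at, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]      # fresh hit: upstream is not touched
        value = self.fetch(key)  # miss or stale: go upstream once
        self._store[key] = (time.monotonic() + self.ttl, value)
        return value

calls = []
cache = TTLCache(30.0, fetch=lambda k: calls.append(k) or "user:" + k)
cache.get("42")
cache.get("42")   # second read is served from the cache; one upstream call
```

The same shape applies at every staging layer the paragraph lists — CDN PoP, API gateway, or replicated store — only the TTL and the definition of "upstream" change.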



Quote for the day:

"The secret of leadership is simple: Do what you believe in. Paint a picture of the future. Go there. People will follow." -- Seth Godin