Daily Tech Digest - July 12, 2019

Reinforcement learning (RL) is an area of machine learning that has received a great deal of attention from researchers over the past decade. Benaich and Hogarth define it as being concerned with "software agents that learn goal-oriented behavior by trial and error in an environment that provides rewards or penalties in response to the agent's actions (called a 'policy') towards achieving that goal." A good chunk of the progress made in RL has come from training AI to play games, equaling or surpassing human performance. StarCraft II, Quake III Arena and Montezuma's Revenge are just some of those games. More important than the sensationalist aspect of "AI beats humans", however, are the methods through which RL may reach such outcomes: play-driven learning, combining simulation with the real world, and curiosity-driven exploration. Can we train AI by playing games? As children, we acquire complex skills and behaviors by learning and practicing diverse strategies and behaviors in a low-risk fashion, i.e., play time.
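The trial-and-error loop described above can be made concrete with a minimal tabular Q-learning sketch. This is purely illustrative and not drawn from any system in the article; the corridor environment, hyperparameters, and function name are all invented for the example. An agent starts at one end of a short corridor and, from reward alone, learns that moving right reaches the goal.

```python
import random

def q_learning_chain(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a toy corridor: action 0 = left, action 1 = right.
    The environment pays a reward of +1 only on reaching the final state."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # Q-value per (state, action)
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
            a = rng.randrange(2) if rng.random() < eps else (0 if q[s][0] > q[s][1] else 1)
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0  # reward only at the goal
            # Temporal-difference update toward reward plus discounted future value.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning_chain()
# After training, the greedy policy prefers "right" (action 1) in every state.
policy = [0 if qa[0] > qa[1] else 1 for qa in q[:-1]]
```

The point is the shape of the loop, not the toy task: the agent is never told the rules, only rewarded or not, which is exactly the "trial and error in an environment that provides rewards or penalties" the definition describes.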


APT Groups Make Quadruple What They Spend on Attack Tools

The potential benefit from an attack far exceeds the cost of a starter kit, says Leigh-Anne Galloway, cybersecurity resilience lead at Positive Technologies. For groups like Silence, the profit from one attack is typically more than quadruple the cost of the attack toolset, she says. The ROI for some APT groups can be many orders of magnitude higher. Positive Technologies, for instance, estimated that APT38, a profit-driven threat group with suspected backing from the North Korean government, spends more than $500,000 to carry out attacks on financial institutions but gets over $41 million in return on average. A lot of the money that APT38 spends is on tools similar to those used by groups engaged in cyber-espionage campaigns. Building an effective system of protection against APTs can be expensive, Galloway says. For most organizations that have experienced an APT attack, the cost of restoring infrastructure is in many cases the main item of expenditure. "It can be much more than direct financial damage from an attack," she says.


Smarter IoT concepts reveal creaking networks

"The internet, as we know it, is based on network architectures of the 70s and 80s, when it was designed for completely different applications," the researchers say in their media release. The internet has centralized security, which causes choke points, and an inherent lack of dynamic controls, which translates to inflexibility in access rights — all of which make it difficult to adapt the IoT to it. Device, data, and process management must be integrated into IoT systems, say the group behind the project, called DoRIoT (Dynamische Laufzeitumgebung für Organisch (dis-)Aggregierende IoT-Prozesse), which translates as Dynamic Runtime Environment for Organic dis-Aggregating IoT Processes. "In order to close this gap, concepts [will be] developed in the project that transparently realize the access to the data," says Professor Sebastian Zug of the University of Freiberg, a partner in DoRIoT. "For the application, it should make no difference whether the specific information requirement is answered by a server or an IoT node."


Managing Third-Party Risks: CISOs' Success Strategies

As more organizations rely on third parties for various services, managing the security risks involved is becoming a bigger challenge. Those risks, indeed, can be significant. For example, earlier this year, Indian IT outsourcing giant Wipro was targeted by hackers who in turn launched phishing attacks against its customers. Among the toughest third-party risk management challenges are: Keeping track of the long list of outsourcers an organization uses and making sure they're assessed for security; Taking steps to minimize the amount of sensitive data that's shared with vendors - and making sure that data is adequately protected; and Holding vendors to a uniform standard for security. "For most organizations, there is still a long way to go in strengthening governance when it comes to vendor management," says Jagdeep Singh, CISO at InstaRem, a Singapore-based fintech company. "We need to look at the broader risk posture that vendors bring in ... which will determine the sort of due diligence you want to carry out."


To encourage an Agile enterprise architecture, software teams must devise a method to get bottom-up input and enforce consistency. Apply tenets of continuous integration and continuous delivery all the way to planning and architecture. With a dynamic roadmap, an organization can change its planning from an annual endeavor to a practically nonstop effort. Lufthansa Systems, a software and IT service provider for the airline industry under parent company Lufthansa, devised a layered approach to push customer demand into product architecture planning. Now, the company can continuously update and improve products, said George Lewe, who manages the company's roster of Atlassian tools that underpin the multi-team collaboration. "We get much more input from the customers -- really cool ideas," Lewe said. "Some requests might not fit into our product strategy or, for technical reasons, it's not possible, but we can look at all of them." Lufthansa Systems moved its support agents, product managers and software developers onto Atlassian Jira, a project tracking tool, with a tiered concept. 


What does the death of Hadoop mean for big data?

The Hadoop software framework, which facilitated distributed storage and processing of big data using the MapReduce programming model, served these data ambitions sufficiently. The modules in Hadoop were developed for computer clusters built from commodity hardware and eventually also found use on clusters of higher-end hardware. But broader adoption of the open-source distributed storage technology, inspired by systems Google first described, did not come to be, as enterprises began opting to move to the cloud and to explore AI, including machine learning and deep learning, as part of their big data initiatives. Worse, several big Hadoop-based solution providers that had been unprofitable for years were forced to merge to minimize losses, and one may be forced to shut down altogether. However, the question remains whether the fate of these vendors is indicative only of the demise of Hadoop-powered solutions and other open source data platforms, or of the death of big data as a whole. Was big data merely a fad or a passing interest of industries?
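The MapReduce programming model mentioned above is simple enough to sketch in a few lines. This is a toy, single-process illustration of the model's three conceptual phases, not Hadoop's actual Java API; the function names and sample documents are invented. A map phase emits key-value pairs, a shuffle groups them by key, and a reduce phase aggregates each group — here, the classic word count.

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Mapper: emit a (word, 1) pair for every word in one document.
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    # Shuffle: group all values by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: aggregate (sum) the values collected for each key.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data big compute", "big data"]
counts = reduce_phase(shuffle(chain.from_iterable(map_phase(d) for d in docs)))
# counts -> {"big": 3, "data": 2, "compute": 1}
```

In a real Hadoop cluster the mappers and reducers run in parallel across machines, with the framework handling the shuffle, distribution, and fault tolerance — which is precisely what made the model attractive on commodity hardware.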


From Machine Learning to Machine Cognition  

Keeping logic and decisions outside the network is what has been done until now. For decisions, we use automated systems based on software running on CPUs instead of artificial cognitive networks. While these work very well and will still be present for a long time, they are limited. Basically, these programs perform simple iterative tasks or move controls and numbers around monitor windows with millions of lines of code. This approach may be good for games and simple narrow tasks, but it is not great for dealing with general concepts. Such programs will not ensure enough internal connections, and they will hardly evolve into intelligence; the complexity required to emulate imagination, intuition and the like is just too high. Image recognition was developed with neural networks because it was impossible to write an iterative algorithm for it. The same should be done with cognition: decisions should use neurons, and cognition should be kept inside the network together with concepts and learning, as they share common neurons.


Open-Source Tool Lets Anyone Experiment With Cryptocurrency Blockchains

In researching blockchains, Shudo and his colleagues searched for a simulator that would help them experiment with and improve the technology. But existing simulators were too hard to use and lacked the features the team wanted. Moreover, these simulators had apparently been created for specific research and were abandoned soon after that work was completed, because many of the tools the group found were no longer being updated.  "The most recent simulator we looked at was developed in October 2016," says Shudo. "And it was no longer being maintained." So, the group developed its own simulator. Dubbed SimBlock, it runs on any personal computer supporting Java and enables users to easily change the behavior of blockchain nodes. Consequently, investigating the effects of changed node-behavior has now become a straightforward matter, says Shudo. "All the parameters of the nodes in SimBlock are written in Java," he explains. "These source files are separated from the main SimBlock Java source code, so the user simply edits [the nodes’] source code to change their behavior."


Visual Studio Code: Stepping on Visual Studio’s toes?
Microsoft describes Visual Studio as a full-featured development environment that accommodates complex workflows. Visual Studio integrates all kinds of tools in one environment, from designers, code analyzers, and debuggers to testing and deployment tools. Developers can use Visual Studio to build cloud, mobile, and desktop apps for Windows and macOS.  Microsoft describes Visual Studio Code, on the other hand, as a streamlined code editor, with just the tools needed for a quick code-build-debug cycle. The cross-platform editor complements a developer’s existing tool chain, and is leveraged for web and cloud applications. But while Microsoft views the two tools as complementary, developers have been raising questions about redundancy for years. Responses to a query on Stack Overflow, made four years ago, sum up the differences this way: Visual Studio Code is “cross-platform,” “file oriented,” “extensible,” and “fast,” whereas Visual Studio is “full-featured,” “project and solution oriented,” “convenient,” and “not fast.”


Attacks against AI systems are a growing concern


The continuing game of “cat and mouse” between attackers and defenders will reach a whole new level when both sides are using AI, said Hypponen, and defenders will have to adapt quickly as soon as they see the first AI-enabled attacks emerging. But despite the claims of some security suppliers, Hypponen told Computer Weekly in a recent interview that no criminal groups appear to be using AI to conduct cyber attacks. The Sherpa study therefore focuses on how malicious actors can abuse AI, machine learning and smart information systems. The researchers identify a variety of potentially malicious uses for AI that are already within attackers’ reach, including the creation of sophisticated disinformation and social engineering campaigns. Although the research found no definitive proof that malicious actors are currently using AI to power cyber attacks, as indicated by Hypponen, the researchers highlighted that adversaries are already attacking and manipulating existing AI systems used by search engines, social media companies, recommendation websites, and more.



Quote for the day:


"Leadership is a matter of having people look at you and gain confidence, seeing how you react. If you're in control, they're in control." -- Tom Landry


Daily Tech Digest - July 11, 2019

How IoT is reshaping network design

IoT
In a world of always-on ubiquitous connectivity, latency and reliability loom over everything, whether you’re talking about self-driving cars or Industry 4.0. These two challenges are driving much of the change that we’ll see in network design over the next few years. If the industry is to realize the promised benefits of IoT, we must increase the ability to support more machine-to-machine communications in near-real time. In applications like autonomous vehicles, latency requirements are on the order of a couple of milliseconds. GSMA, the international association for mobile technology, has specified that 5G's latency should be 1 millisecond, which is 50 times better than 4G's current 50 milliseconds. Satisfying these requirements involves a radical rethink about how and where we deploy assets throughout the network. For example, routing and backing up data using a traditional star-type network design will become increasingly unfeasible. The vast amount of traffic and the latency demands would easily overwhelm a north-south data flow.



Cyber security will always be an issue, “until we get rid of passwords” — Frank Abagnale Jr

The password is insecure: a hacker could log into an individual’s bank account and they wouldn’t even know. This is the first issue: passwords are easily lost and even more easily stolen, via phishing or malware attacks. Once a cybercriminal has access to the password, they can replay it over and over again. “Unfortunately, because passwords are free and easy, no one gave design much thinking,” said Mr Eisen. “But, now the cost of passwords is obvious” — they’re the great security vulnerability and largely responsible for the data breaches that pepper news headlines. Historically, security and user experience have been at odds with each other, because everyone believed that making systems less user friendly (longer, more complex passwords, for example) made them more secure — this is a fallacy and hinders adoption rates, making systems, ironically, less secure. “This is not a computer-to-computer interaction with longer keys. These are humans we’re talking about,” continued Mr Eisen.


Logitech wireless USB dongles vulnerable to new hijacking flaws

The vulnerabilities allow attackers not only to sniff keyboard traffic, but also to inject keystrokes (even into dongles not connected to a wireless keyboard) and take over the computer to which a dongle has been connected. When encryption is used to protect the connection between the dongle and its paired device, the vulnerabilities also allow attackers to recover the encryption key. Furthermore, if the USB dongle uses a "key blacklist" to prevent the paired device from injecting keystrokes, the vulnerabilities allow the bypassing of this security protection system. Marcus Mengs, the researcher who discovered these vulnerabilities, said he notified Logitech about his findings, and the vendor plans to patch some of the reported issues, but not all. According to Mengs, the vulnerabilities impact all Logitech USB dongles that use the company's proprietary "Unifying" 2.4 GHz radio technology to communicate with wireless devices.



Financial Firms Face Threats from Employee Mobile Devices

Instead of malware, criminals are using phishing attacks to gain access to financial services networks, but not just any attacks. "We're seeing more targeted attacks within financial services instead of kind of the scattershot approach where you send out a phishing attack to everybody in the organization," he explains. The success of phishing attacks on mobile devices in financial services may be part of a larger pattern of risky mobile behavior by those in the industry. According to the report, 42% of the organizations represented had devices with "side-loaded" apps — apps downloaded and installed from sites other than the app stores approved for the device. Covington says, "You start to see the implications of letting employees manage their own device." And those employees are managing their devices in tremendous numbers, he says. Employee-owned devices, used to conduct company business, are targets because of the sensitive data they contain. "There's no doubt in my mind that the criminal side of the equation is after rich data," he says.


Digital skills — key to driving UK prosperity post-Brexit, according to Salesforce

The data from the Salesforce report highlights concerns of a potential shortage of tech skills post-Brexit, with over half of business leaders believing the UK is at risk of a tech brain drain. To address this, businesses are now recognising the pivotal role they must play in nurturing tech talent and digital skills in the country. One in four business leaders feel responsibility for doing so lies mainly with private enterprise; over half (55%) plan to invest more in developing their own tech talent, with the same number pledging to address the skills gap by re-skilling older generations; and 51% intend to do more to re-skill people from disadvantaged backgrounds. “There are issues that business needs to lead on regardless of what’s happening in the world of politics,” said Paul Smith, EVP and GM, Salesforce UK. “The economy is changing as new technologies emerge.”


The Bank of Amazon: How big tech is disrupting banking


Big tech companies have already begun to embark on financial ventures, with payment platforms such as Google’s Google Wallet and Google payments, Amazon lending to SME marketplace sellers, Facebook’s partnership with Clear Bank on a product called Charged, a programme that allows financing for advertising, and Apple’s credit card, announced this year with Goldman Sachs and Marcus in the US. ... Big tech is in the position of having a significant “data advantage” over banks or fintechs, with the ability to glean more information about their users than others could hope to achieve. With the tech resources to offer an improved user experience and services that are integrated into their existing platforms, a grasp of artificial intelligence that traditional banks are only just beginning to deploy, sophisticated cloud computing, and an already loyal user base, up to 40% of the revenue currently generated by the US financial industry could move over to Big Tech, according to McKinsey.


Restoring Vision With Bionic Eyes: No Longer Science Fiction


"Brain-computer interfaces" can be used both for treating neurological and mental disorders as well as for understanding brain function, and now engineers have developed ways to manipulate these neural circuits with electrical currents, light, ultrasound, and magnetic fields. Remarkably, we can make a finger, arm, or even a leg move just by activating the right neurons in the motor cortex. Similarly, we can activate neurons in the visual cortex to make people see flashes of light. The former allows us to treat neurological conditions such as Parkinson's disease and epilepsy, whereas the latter should eventually allow us to restore vision to the blind. ... We have a real opportunity here to tap into the existing neural circuitry of the blind and augment their visual senses much like Google Glass or the Microsoft HoloLens. For example, make things appear brighter the closer they get, use computer vision to mark safe paths and combine it with GPS to give visual directions, warn users of impending dangers in their immediate surroundings, or even extend the range of "visible" light with the use of an infrared sensor. 


The Potential of AI for Utilities

One of the biggest confusion factors is all the different terms that are used as synonyms for AI such as machine learning, deep learning, cognitive computing, etc. The list grows daily. Keep in mind, these terms are not interchangeable, but they are often used that way. That doesn’t help anyone trying to figure out AI or how to use it. First of all, AI is a division of computer science using complex instruction sets to perform what appears to be human-like intelligence. These programs are powered by algorithms, and that is the ingredient causing the mystique. Without going into a lot of detail, an algorithm is a set of step-by-step computer instructions that can use data to build models that make predictions based on the data. Remember, we are a long way off from the thinking, talking robots seen in movies and on television. Algorithms are how AI demonstrates being smart, but be aware it’s not intelligent, which is the critical distinction. This type of AI is referred to as Narrow AI or Applied AI. It is said to simulate human thought, but each application can only carry out one specific task with a limited range of functions.
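The description above, of an algorithm as step-by-step instructions that use data to build a model that makes predictions, can be made concrete with a small example. The data and function name here are hypothetical, chosen only to illustrate the idea: fitting a least-squares line to paired observations and then using the fitted line to predict a new value.

```python
def fit_line(xs, ys):
    """A step-by-step algorithm: compute the least-squares slope and
    intercept from paired observations, returning a predictive model."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope is the covariance of x and y divided by the variance of x.
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    slope = num / den
    intercept = mean_y - slope * mean_x
    # The "model" is just a function from a new x to a predicted y.
    return lambda x: slope * x + intercept

# Hypothetical readings: temperature (x) against grid load (y).
predict = fit_line([10, 20, 30, 40], [15, 25, 35, 45])
# predict(50) -> 55.0
```

Nothing here "thinks": the program follows fixed arithmetic steps, yet its output looks smart because the steps were chosen to extract a pattern from data — which is exactly the Narrow AI distinction the paragraph draws.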


RiskIQ uncovers new Magecart campaign


This attack introduces yet another method by Magecart that RiskIQ researchers call a “spray and pray” approach. Because skimmers work only when placed on payment or checkout pages, most Magecart attacks target specific e-commerce sites and attempt to drop a skimmer only on pages with payment forms. However, the ease of compromise that comes from finding S3 buckets misconfigured to allow public access means that even if only a fraction of their skimmer injections return payment data, it will yield a substantial return on investment, the researchers said. “This is a brand new twist on Magecart,” said Yonathan Klijnsma, head threat researcher at RiskIQ. “Although this group chose reach over targeting, they likely ended up getting their skimmer on enough payment pages to make their attack lucrative. They have done their cost-benefit analysis.” The scale of this latest attack illustrates how easy it is for threat actors of any kind to compromise a vast quantity of websites at once with scripts stored in misconfigured S3 buckets.


Stream Processing Anomaly Detection Using Yurita Framework

Working at PayPal on a next generation stream processing platform, we started to notice that many of our users wanted to use stream processing to apply anomaly detection models in real time. After we explored different architectures to create a flexible production grade framework that can scale to real world workloads, eventually we decided to go with a pipeline-based API, inspired by other open source projects like scikit-learn and Spark MLlib. This work has led to the development of Yurita - an open source anomaly detection framework for stream processing. Yurita is based on the new Spark structured streaming framework, and utilizes its processing engine capabilities to reach high scale and performant execution. The name Yurita comes from a traditional Japanese gold panning tool. ... Without knowing what the normal behavior of a metric is, we would be able to use only simple anomaly detection techniques, like rule-based decisions which also require a deep understanding of each specific dataset, and therefore are not scalable from a productivity point of view.
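As a contrast to the simple rule-based techniques the paragraph mentions, a minimal statistical detector can learn a metric's normal behavior from a rolling window of recent values. This is an illustrative sketch only, not Yurita's actual API; the class name, window size, and threshold are invented. A value is flagged when it deviates from the rolling baseline by more than a chosen number of standard deviations.

```python
from collections import deque
import math

class ZScoreDetector:
    """Flag a value as anomalous when it deviates from the rolling
    baseline by more than `threshold` standard deviations."""
    def __init__(self, window=50, threshold=3.0):
        self.values = deque(maxlen=window)  # rolling window of recent values
        self.threshold = threshold

    def observe(self, x):
        anomalous = False
        if len(self.values) >= 10:  # wait until a minimal baseline exists
            mean = sum(self.values) / len(self.values)
            var = sum((v - mean) ** 2 for v in self.values) / len(self.values)
            std = math.sqrt(var)
            anomalous = std > 0 and abs(x - mean) / std > self.threshold
        self.values.append(x)  # the new value joins the baseline afterward
        return anomalous

detector = ZScoreDetector()
stream = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.3, 10.1, 9.9, 10.0, 10.2, 50.0]
flags = [detector.observe(x) for x in stream]
# Only the final spike is flagged; the detector never needed a hand-written rule.
```

Unlike a fixed rule such as "alert above 40", the baseline adapts to each metric automatically, which is the scalability advantage the paragraph attributes to model-based over rule-based detection.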



Quote for the day:


"All organizations are perfectly designed to get the results they are now getting. If we want different results, we must change the way we do things." -- Tom Northup


Daily Tech Digest - July 10, 2019

One reason why edge computing defies easy definition is that it takes many different forms. As Jaromir Coufal, principal product manager at Red Hat, recently pointed out to me, there is no single edge. Instead, there are lots of edges – depending on what compute features are needed. He suggests that we can think of the edge as something of a continuum of capabilities with the problem being resolved determining where along that particular continuum any edge solution will rest. ... Done properly, edge computing can provide services that are both faster and more reliable. Applications running on the edge can be more resilient and run considerably faster because their required data resources are local. In addition, data can be processed or analyzed locally, often requiring only periodic transfer of results to central sites. While physical security might be lower at the edge, edge devices often implement security features that allow them to detect 1) manipulation of the device, 2) malicious software, and 3) a physical breach and wipe data.



Deep Learning for Computer Vision: A Beginner's Guide

What distinguishes deep learning is that its networks contain many hidden layers. This extra complexity empowers machines to learn from unstructured, unlabeled data as well as labeled and categorized data. Note that none of these concepts are particularly new — rapid advances in computing power and technology enables the models to be fed with large volumes of data. The more data available, the more proficient the models become at learning tasks. Speech recognition, image recognition, natural language processing (NLP), and computer vision are some of the areas deep learning has improved dramatically. Many technology companies now specialize in providing platforms for training deep learning models in computer vision and other areas. Such companies have also facilitated further innovation in these artificial intelligence branches. ... The most exciting potential use for this computer vision function is real-time semantic segmentation used by self-driving cars. Identifying and localizing objects accurately can improve the safety and reliability of autonomous vehicles.


How ASEAN firms are turning data into critical assets


Rouam said SGX is now looking to grow that figure by providing new data services, such as data visualisation, to external customers, effectively extending the value of its data beyond internal use. “We have data teams that use data for internal purposes, such as analysing customer behaviour as well as meeting regulatory and operational requirements,” said Rouam. “To turn data into a critical asset, we did a lot of work on the technology and legal processes.” This includes building a centralised database with in-memory capabilities to ensure timely access to machine-readable data for data scientists. SGX has also built a logical layer to help business users understand the data, along with a data dictionary and business glossary. At SP Digital, the digital services subsidiary of Singapore utility provider SP Group, data has always been critical to its operations, enabling it to control and manage the country’s critical energy infrastructure. About three years ago, the company embarked on a digital transformation initiative, which, among other areas, includes building a data lake to house all its data, according to Chang Sau Sheong, CEO of SP Digital.



Applied AI in Software Development

When it comes to validating the properties of an object present on the UI, TestComplete provides multiple options to access the object’s properties. For example, a button object can have properties like enabled or disabled, text, coordinates, ID, class, etc. Hence, it becomes easy to identify an object based on these properties and confirm the expected behavior. However, content in images or in graphical, chart-like interfaces, which are becoming more common with the proliferation of business intelligence and data-driven dashboards, is difficult to identify, validate, or perform automated actions on. This has changed with the latest version of TestComplete, version 12.60, which overcomes the issue by making use of an API-driven optical character recognition (OCR) service. There are certain prerequisites to using this option. First, one needs to enable the OCR plugin by installing the extension under File –> Install Extension.


Caitlin Long, Blockchain’s Ambassador of Hope

Caitlin Long, Blockchain’s Ambassador of Hope: Notes from a RegTech conference keynote
What became obvious is that humans, no matter their industry or background, are inherently lazy. When it comes to understanding what’s going on in Wyoming and the 13 fast-tracked bills that were passed (thanks to the leadership and vision of Caitlin Long from 2018 to 2019), what is realized is that humans are lazy and crave convenience. When it comes to ensuring that, as emerging technologies arrive and society evolves, our policies and laws are “backward compatible”, humans are lazy. When it comes to educating policymakers, and, vice versa, to policymakers truly understanding the purpose and potential of new technology, humans seek convenience. Complacency is not good enough anymore. Laziness is no longer an excuse. All of us have to step up to the plate, engage, and work in partnership with policymakers and technologists to thoughtfully craft legislation that 1) keeps consumers, the “main street moms and pops,” safe, and 2) keeps the nefarious “bad guys” at bay


Shocked by your cloud provider bills? Here’s what to do about it

One of the most common reasons businesses see their cloud costs spiral out of control is that they moved to the cloud without a plan or strategy. It’s easy to buy and consume cloud, so if you don’t have a strategy governing your company’s cloud use, it’s really easy to buy and consume more than you meant to. If you don’t have a policy for cloud use, put one together now. This will help you manage the number of cloud platforms selected as well as the costs by making sure that using the cloud is an active decision rather than something your organization does by default. ... Because of the cloud’s ability to scale more or less infinitely, the problem for users is a bit like what you’d get in the summer during a heatwave: you blast the A/C for a week because you need the relief, but you’re in for a shock when the bill comes – and what can you do? You already used all that sweet, cool air. With the cloud, you can head this particular problem off by setting limits ahead of time with your cloud provider.


The direction of business intelligence is changing to forward


Data analysis has traditionally been a relatively straightforward process: Input data, generate a report, and analyze the report to glean insight. But business intelligence is changing. Traditional data analysis is looking backward. It's attempting to figure out why something happened. It's not revealing what will happen. The look of the reports has changed significantly from dot-matrix spreadsheets to eye-popping computer graphics, thanks to Tableau, Microsoft Power BI and other data visualization platforms. But they're still reactive. That won't be good enough for the next generation of BI platforms. "In the next three to five years, instead of asking questions of data, the data will start suggesting observations," said Tim Crawford, CIO strategic adviser at AVOA, a consulting firm in Rolling Hills Estates, Calif. "You'll see things that you hadn't even thought to ask in the first place. As AI comes into play, you can expect to see tools that will identify and highlight things you hadn't thought to ask in the first place."


The Importance of QA Testing for Software Development

If you only rely on internal testing by the same people who developed the software, then they may praise their own work and be reluctant to make changes. Having testers come from different backgrounds and cultures adds diversity to the testing. This is particularly important if you plan on launching your software, service or product worldwide. This is also a reason why companies should do layered QA testing at different stages of design or development. Early testing helps prevent costly mistakes and wasteful development for features that users will not want or care about. As the product develops, further testing and documentation help guide the process in the right direction: one that will satisfy market needs and consumers.  Therefore, QA testing is not just done to eliminate bugs in the end, but to make sure the correct procedures are in place. Rather than finding defects, it deals with preventing them throughout the development process.


Will IBM’s acquisition be the end of Red Hat?

The good news is that this merger of IBM and Red Hat appears to offer each of the companies some significant benefits. IBM makes a strong move into cloud computing, and Red Hat gains a broader international footing. The other good news relates to the pace at which this acquisition occurred. Initially announced on October 28, 2018, it is now more than eight months later. It’s clear that the leadership of each company has not rushed headlong into this new relationship. Both parties to the acquisition appear to be moving ahead with trust and optimism. IBM promises to ensure Red Hat's independence and will allow it to continue to be "Red Hat" both in name and business activity. ... Will this acquisition be the end of Red Hat? That outcome is not impossible, but it seems extremely unlikely. For one thing, both companies stand to gain significantly from the other’s strong points. IBM is likely to be revitalized in ways that allow it to be more successful, and Red Hat is starting from a very strong position. While it’s a huge gamble by some measurements, I think most of us Linux enthusiasts are cautiously optimistic at worst.


Multimodal Sentiment Analysis: Addressing Key Issues and Setting Up the Baselines

The primary advantage of analyzing videos over mere text analysis, for detecting emotions and sentiment, is the surplus of behavioral cues. Videos provide multimodal data in terms of vocal and visual modalities. The vocal modulations and facial expressions in the visual data, along with text data, provide important cues to better identify the true affective states of the opinion holder. Thus, a combination of text and video data helps to create a better emotion and sentiment analysis model. Recently, a number of approaches to multimodal sentiment analysis have been proposed, producing interesting results [11, 13]. However, major issues remain mostly unaddressed in this field, such as the consideration of context in classification, the effect of speaker-inclusive versus speaker-exclusive scenarios, the impact of each modality across datasets, and the generalization ability of a multimodal sentiment classifier. Not tackling these issues has made it difficult to compare different multimodal sentiment analysis methods effectively.



Quote for the day:


"When you accept a leadership role, you take on extra responsibility for your actions toward others." -- Kelley Armstrong


Daily Tech Digest - July 09, 2019

Colocation facilities buck the cloud-data-center trend

Poole said the average capital expenditure for a stand-alone enterprise data center that is not a part of the corporate campus is $9 million. Companies are increasingly realizing that it makes sense to buy the racks of hardware but place them in someone else’s secure facility that handles the power and cooling. “It’s the same argument for doing cloud computing but at the physical-infrastructure level,” he said. Mike Satter, vice president for OceanTech, a data-center-decommissioning service provider, says enterprises should absolutely outsource data-center construction or go the colo route. Just as there are contractors who specialize in building houses, there are experts who specialize in data-center design, he said. He added that with many data-center closures there is subsequent consolidation. “For every decommissioning we do, that same company is adding to another environment somewhere else. With the new hardware out there now, the servers can do the same work in 20 racks as they did in 80 racks five years ago. That means a reduced footprint and energy cost,” he said.


The Phantom Menace in Unit Testing

This is not a rant about unit testing; unit tests are critically important elements of a robust and healthy software implementation. Instead, it is a cautionary tale about a small class of unit tests that may deceive you by seeming to provide test coverage but failing to do so. I call this class of unit tests phantom tests because they return what are, in fact, correct results but not necessarily because the system-under-test (SUT) is doing the right thing or, indeed, doing anything. In these cases, the SUT “naturally” returns the expected value, so doing (a) the correct thing, (b) something unrelated, or even (c) nothing, would still yield a passing test. If the SUT is doing (b) or (c), then it follows that the test is adding no value. Moreover, I submit that the presence of such tests is often deleterious, making you worse off than not having them because you think you have coverage when you do not. When you then go to make a change to the SUT supposedly covered by that test, and the test still passes, you might blissfully conclude that your change did not introduce any bugs to the code, so you go on your merry way to your next task. 
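The distinction is easiest to see in a tiny sketch. The following Python example is illustrative (the function and test names are hypothetical, not from the article): because the phantom test's input never exercises the behavior under test, a do-nothing implementation passes just as readily as the correct one.

```python
def remove_negatives(values):
    # Correct SUT: strip negative numbers from the list.
    return [v for v in values if v >= 0]

def remove_negatives_noop(values):
    # Broken SUT: does nothing except copy its input.
    return list(values)

# Phantom test: the input contains no negatives, so the expected output
# "naturally" equals the input. Both implementations pass, so the test
# proves nothing about the SUT.
assert remove_negatives([1, 2, 3]) == [1, 2, 3]
assert remove_negatives_noop([1, 2, 3]) == [1, 2, 3]

# Meaningful test: the input actually exercises the behavior, so only
# the correct implementation can pass.
assert remove_negatives([1, -2, 3]) == [1, 3]
assert remove_negatives_noop([1, -2, 3]) != [1, 3]
```

A quick safeguard is to deliberately break the SUT (or run a mutation-testing tool) and confirm the test actually fails; if it still passes, you have a phantom.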


British Airways facing £183m GDPR fine


The ICO said its investigation found that a variety of information was compromised by poor security arrangements at the company, including login, payment card, and travel booking details, as well as name and address information. Information Commissioner Elizabeth Denham said: “People’s personal data is just that – personal. When an organisation fails to protect it from loss, damage or theft it is more than an inconvenience. That’s why the law is clear – when you are entrusted with personal data you must look after it. Those that don’t will face scrutiny from my office to check they have taken appropriate steps to protect fundamental privacy rights.” The ICO said BA has cooperated with the investigation and has made improvements to its security arrangements since the breach came to light. The company now has 28 days to make representations to the ICO about the findings of its investigation and the proposed fine. Willie Walsh, chief executive of BA owner International Airlines Group, has confirmed that the airline will make representations to the ICO, according to Reuters. “We intend to take all appropriate steps to defend the airline’s position vigorously, including making any necessary appeals,” he said.



How artificial intelligence can transform the legal sector


Firms are experimenting with the use of chatbot technology to deliver basic legal advice. DoNotPay has already garnered a lot of attention for allowing users to appeal parking fines, and it’s not unreasonable to expect that as the technology becomes more sophisticated, higher quality and more specific advice could be offered by a similar machine learning (ML) tool. As with all new technologies, the ultimate aim is to ensure that firms offer a higher quality and more consistent service. It is important to note that AI is not positioned to outperform the high-end tasks performed by legal professionals, but should rather be seen as a tool designed to support them by carrying out time-consuming research or administrative tasks. That’s why now is the time for forward-thinking firms to begin integrating AI into their legal services as well as their administrative procedures. The use of AI does, however, give rise to a few practical and ethical considerations that legal teams must be aware of. Many revolve around the sensitive data that firms would be required to store on clients in order to offer an optimal service.


Think like a criminal to beat them at their own game ⁠— Frank Abagnale Jr

Crime today, of course, has a significant physical element. However, over the last 20 years there has been a criminal movement towards the digital. Cybersecurity Ventures predicts cybercrime damages will cost the world $6 trillion annually by 2021, up from $3 trillion in 2015. The attack surface is now different, but “the one thing that never changes is that criminals are all the same,” said Mr Abagnale. “So, if you think like a criminal, it doesn’t matter what they do, you can figure out their motives and means.” Over the course of a 43-year career in the FBI, Mr Abagnale has worked on every single data breach, including TJX in 2007 and, more recently, the Marriott and Facebook breaches. “The one thing that I’ve learnt is that every breach occurs because somebody in that company did something that they weren’t supposed to do, or somebody in that company failed to do something they were supposed to do,” he said. “It always comes down to the human element,” reiterated Mr Abagnale.


IAM market evolves, but at a cost

"We're in this spot with a lot of technical debt," Daum said, adding that State Auto is a G Suite customer and is in the cloud with AWS, but is hesitant to add on another vendor just for identity management. "We're paying a lot of money to a lot of different companies and we're trying to find a way to see which of those companies can be used for identity services. No offense to Ping Identity or Okta, but why pay them however much money if we can limit the amount of cooks in the kitchen." Emerging capabilities within IAM products intrigued Daum, but never bested ROI. "Where's the value added?" Daum said. "Everyone is talking about cloud and password-less and zero trust. Those buzzwords sound nice, but the cost to implement is still huge." Zero trust is a security architecture introduced by Forrester Research that is designed to assess threats not just from outside the network, but from within it. It uses the principle "never trust, always verify" anything trying to connect to the network to ensure it remains secure.


Don’t wait up for the open cloud

Open clouds have been a concept since cloud computing became a thing; the reality is that we’re dealing with public companies that have to return an investment to shareholders. They operate based on gaining profitable advantages and working within their own market microcosms. They court users in their own way, pushing their own cloud services, which leads to workloads that are not easily transported from cloud to cloud. Indeed, if the objective is “cloud native,” by definition that's going to mean lock-in. A few open cloud standards have been pushed in the past, and some are still being pushed today. Although they found traction as private clouds, with some public cloud instances as well, private clouds have since declined relative to public clouds, and those public cloud instances have shut down. It’s just too hard to keep up with the larger public cloud players and their billion-dollar R&D and marketing budgets. This leads me to a few conclusions about the state of cloud computing now, as well as some projections of where things are likely to go: The notion of interoperable public clouds is not likely to happen unless the user bases demand it and the public cloud providers feel the pinch.


Dos and don'ts of navigating data analytics in the cloud

The marketing hype positioning the cloud as an “easy button” can draw you in, but the reality is that moving an enterprise data warehouse or another type of analytical environment to the cloud is just like moving from one database platform to another – and it comes with the same challenges. You and your team need to be ready to migrate, monitor and test the new environment, and when you are migrating systems that have developed over time, “lifting-and-shifting” does not come without running into technology issues or making functional decisions that impact how a business or application is run. It’s true that with the cloud, you never have to perform low-level administration of your environment such as software updates and server sizing. However, higher-level administration such as database performance, usage analysis, cost management, and security and privacy management will always be a requirement.


Must-have features in a modern network security architecture

Modern network security must have these features
As the old security adage goes, “the network doesn’t lie.” Since all cyber attacks use network communications as part of their kill chain, security analysts must have access to end-to-end network traffic analysis (NTA) up and down all layers of the OSI stack. The best NTA tools will supplement basic traffic monitoring with detection rules, heuristics, scripting languages, and machine learning that can help analysts detect unknown threats and map malicious activities into the MITRE ATT&CK framework. ... Network security technologies must support granular policies and rules, subject to immediate alteration based upon changes in things such as user location, network configuration, or newly discovered threats/vulnerabilities. Organizations must have the ability to spin up/spin down or change network security services whenever and wherever they are needed. Modern network security controls must be able to accommodate internet of things (IoT) devices and protocols with the same types of strong policies and enforcement as they offer for standard operating systems. Finally, network security architectures must be built around easily accessed APIs for rapid integration.


A Simplified Value Stream Map for Uncovering Waste

There are a number of ways to display waste in a system. The most common approach is probably the use of value stream maps. These are maps that show the journey of a product from raw material to finished goods delivered to customers. They are very helpful in understanding the flow of goods and pinpointing wasteful delays. These don’t always seem relevant to software engineering because the images of factories, trucks, and forklifts don’t apply. Even the versions developed specifically for software sometimes seem to lack the qualities of being simple and definitive. What if we just want to know one thing: for any given process, how much time is spent waiting versus working? This would give us a simplified view of waste for any process and would be helpful in making it more efficient. The details for constructing this are straightforward. Let’s define working as time spent actively creating a product, time for which customers would gladly pay. Let’s define waiting as time spent waiting on something, time for which customers would not want to pay. We use duration (not effort) for both and we maintain consistent time units between them.
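Under those definitions, the simplified view reduces to a single flow-efficiency ratio. Here is a minimal Python sketch, with illustrative numbers (not from the article):

```python
def flow_efficiency(working, waiting):
    """Fraction of total elapsed time spent actively creating value.

    `working` and `waiting` are durations in the same units
    (hours, days, ...); only their ratio matters.
    """
    total = working + waiting
    if total == 0:
        raise ValueError("no elapsed time recorded")
    return working / total

# Example: a feature needed 8 hours of active work but sat in queues
# for 72 hours between steps.
print(flow_efficiency(8, 72))  # 0.1 -- only 10% of the elapsed time was work
```

Anything that pushes the ratio toward 1.0 is removing waiting, and therefore waste, from the process.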



Quote for the day:


"The one nearest to the enemy is the real leader." -- Ugandan Proverb


Daily Tech Digest - July 08, 2019

An eco-friendly internet of disposable things is coming

The “internet of disposable things is a new paradigm for the rapid evolution of wireless sensor networks,” says Seokheun Choi, an associate professor at Binghamton University, in an article on the school’s website. “Current IoDTs are mostly powered by expensive and environmentally hazardous batteries,” he says. Those costs can be significant in any kind of large-scale deployment, he says. And furthermore, with exponential growth, the environmental concerns would escalate rapidly. The miniaturized battery that Choi’s team has come up with is uniquely charged through power created by bacteria. It doesn’t have metals and acids in it. And it’s designed specifically to provide energy to sensors and radios in single-use IoT devices. Those could be the kinds of sensors ideal for supply-chain logistics where the container is ultimately going to end up in a landfill, creating a hazard. Another use case is real-time analysis of packaged food, with sensors monitoring temperature and location, preventing spoilage and providing safer food handling.


How to Get Hands-On with Machine Learning

If you really want to understand the capabilities and limitations of machine learning, you have to get hands-on. Here's a short list of options for beginners. Everyone should have a conceptual understanding of machine learning, so they can communicate more effectively with practitioners. To really understand what machine learning can and can't do, you have to get hands-on with it, which is what the curious, the career builders, and the DIY problem-solvers are doing. The starting point differs for individuals based on their education and experience. However, the titles of resources may not necessarily reflect that fact. Following is a short list of resources with a bit of insight into their requirements and value. Deep learning, a subcategory of machine learning, has been omitted intentionally to keep the focus of this article on machine learning in general. Competitions provide an opportunity for anyone to get hands-on with machine learning. Don't let the word "competition" scare you, because you'll find a lot of helpful resources at these sites available free to anyone. Later, if you decide to compete, and if you achieve a prominent position on the leader board, you'll have something more to add to your resume.


What is data science? A method for turning data into value

The business value of data science depends on organizational needs. Data science could help an organization build tools to predict hardware failures, allowing the organization to perform maintenance and prevent unplanned downtime. It could help predict what to put on supermarket shelves, or how popular a product will be based on its attributes. "The biggest value a data science team can have is when they are embedded with business teams. Almost by definition, a novelty-seeking person, someone who really innovates, is going to find value or leakage of value that is not what people otherwise expected," says Ted Dunning, chief application architect at MapR Technologies. "Often they'll surprise the people in the business. The value wasn't where people thought it was at first." ... Data science is generally a team discipline. Data scientists are the forward-looking core of most data science teams, but moving from data to analysis, and then transforming that analysis into production value requires a range of skills and roles. For example, data analysts should be on board to investigate the data before presenting it to the team and to maintain data models.


Network Security and Performance Monitoring: The Basics

A security attack might be designed to either strain or eliminate a network’s resources. Once malware enters the network, it may continuously send requests for data to deliberately use up your available bandwidth. The severity of an attack like this could range from slowing the network down to a full-scale denial-of-service (DoS) attack. Whatever the intent, excessive or harmful data usage will put a huge strain on your network’s performance. Without the right security resources in place, your network will have to work hard to keep up. Another way security threats can overload your network is by installing resource-draining applications and leaving them to use up bandwidth. A network performance monitoring (NPM) solution can sweep for any unwanted software and alert your team to it so you can take steps to remove it from your infrastructure. Security threats can target any hardware on your network’s infrastructure. Malware might try to bring down either devices connected to the network or the network nodes that you have installed.


What is a botnet? When armies of infected IoT devices attack


A botnet is a collection of internet-connected devices that an attacker has compromised. Botnets act as a force multiplier for individual attackers, cyber-criminal groups and nation-states looking to disrupt or break into their targets’ systems. Commonly used in distributed denial of service (DDoS) attacks, botnets can also take advantage of their collective computing power to send large volumes of spam, steal credentials at scale, or spy on people and organizations. Malicious actors build botnets by infecting connected devices with malware and then managing them using a command and control server. Once an attacker has compromised a device on a specific network, all the vulnerable devices on that network are at risk of being infected. A botnet attack can be devastating. In 2016, the Mirai botnet shut down a large portion of the internet, including Twitter, Netflix, CNN and other major sites, as well as major Russian banks and the entire country of Liberia.


Blockchain and the sharing economy


Whether blockchain will ever play a meaningful role in the sharing economy is up for debate. Some skeptics say it's all hype and blockchain's role could be minimal at best. But proponents say there is a natural fit between blockchain and the sharing economy services. The Blockchain Council, a group of blockchain experts and enthusiasts who support research and development of the technology, has noted that businesses such as Uber and Airbnb depend on their users to bring value to their networks. "The problem with this model is that the revenue generated is not fairly shared with all of the members that help generate content," the council said. Blockchains, because of their decentralized nature, allow for smart contracts that can deploy software in a secure and decentralized manner, it said. "Therefore, with Blockchain implemented software, we do not need to rely on massive data centers to run the enormous profit-making platforms," the council said.


Meeting the Challenge of Artificial Intelligence

While AI is still an evolving technology, many applications have recently made impressive leaps. For example, computers can defeat chess champions, help drive cars, instruct drones to return automatically, provide medical diagnoses, perform as virtual assistants, and navigate vacuum cleaners through a furnished house. The AI applications for business involve training computers to do tasks employees can perform, learning from experiences and adjusting to new data, if needed. Currently, CPA firms can use intelligent robots to count inventories, inspect fixed assets, handle bank audit confirmations, and read contracts or other documents to generate meaningful insights. Some CPAs may assume that, like big data or blockchain, AI is a relatively recent development. To the contrary, AI research started even before the creation of the Accounting Principles Board, FASB’s predecessor, in 1959. Subsequently, several subfields of AI have emerged, including robotics, perception (vision and speech), machine learning (ML), and expert systems (ES).


Wipe Away the Threat of Wiper Attacks

As with ransomware, resisting wiper malware requires putting defenses in place before attackers come calling. "It is time for strong authentication, least privilege access control - or at least 'read-only' or 'execute-only' - and end-to-end application layer encryption," Murray says. But that's just the start. "You can add privileged access management - PAM - and safe backup with fast recovery to those three measures," Murray tells me. Don't stop there. "We need greatly improved proactive threat detection," Murray says. "We need out-of-band confirmations and alerts for all transactions, many data changes, and some users. We need document management systems for intellectual property." "In addition to implementing multi-factor authentication, make sure that legacy protocols that don't support MFA are either disabled or tightly restricted," Lee Neely, a veteran security professional at Lawrence Livermore National Laboratory, says in a recent SANS Newsletter. "Additionally awareness reminders, including spam/phishing reporting processes, would be timely." Unfortunately, many organizations don't have many of these essential defenses in place.



The role of blockchain in information governance


Today, businesses are finding success with a federated content approach using content services. They are building applications that call content services without consumers knowing which underlying system the content resides in. Within an organization, it is possible to access content across different content services platforms (CSPs) using a single set of content services. However, content also exists outside the control of a business. For instance, government embassies, immigration agencies and law enforcement organizations act upon visa applications at different stages of a visa process. Each organization has its own system for tracking visa applications, and each needs access to the latest information at all times. Similar multi-organization interactions occur in healthcare as patients move between different providers. Each provider adds content to the patient's record, creating a complete medical history. The need for instant access to the latest content can have a life-or-death impact.


Agile Planning: Always Plan for a Product, Not a Project!

Agile estimation is done by evaluating amount, complexity, risk, duration, and business value. There are many agile estimation techniques in practice, including T-shirt sizing, Planning Poker, The Bucket System, Fist to Five, Dot voting, Affinity mapping, etc. Velocity is the amount of work done by the team in a given time; in Agile we sum up the story points completed in a sprint to determine the velocity. This is a measure of the team's productivity; it depends on various factors, but it works best with stable and experienced teams. So, while working with new teams, product owners and scrum masters should be lenient and let the team stabilize. Relative estimation is estimation not in units of time but by how similar items are to each other in terms of complexity. Instead of estimating each user story separately, we estimate by comparing or grouping items of similar difficulty. For example, feature B might be "twice as complex" as feature A, which you have already completed. So, if feature A took two weeks, we can guess that feature B would take four weeks to complete.
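The two measures described above, velocity and relative estimation, can be sketched in a few lines of Python (the names and numbers are illustrative, not from the article):

```python
def velocity(completed_story_points):
    # Velocity: total story points the team completed in one sprint.
    return sum(completed_story_points)

def relative_estimate(reference_duration, complexity_factor):
    # Relative estimation: scale a completed feature's known duration
    # by how much more (or less) complex the new feature is judged to be.
    return reference_duration * complexity_factor

# Stories of 3, 5 and 8 points were finished this sprint:
print(velocity([3, 5, 8]))      # 16 points
# Feature A took 2 weeks; feature B is judged twice as complex:
print(relative_estimate(2, 2))  # 4 weeks
```

In practice the complexity factor comes from team judgment (e.g. Planning Poker), not from a formula; the arithmetic only turns that judgment into a forecast.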



Quote for the day:


"A true leader always keeps an element of surprise up his sleeve which others cannot grasp but which keeps his public excited and breathless." -- Charles de Gaulle


Daily Tech Digest - July 07, 2019

Anti-fraud analytics must be about prevention, not detection

The real challenge for businesses, he said, is not just building the biggest analytics platform possible – but embedding those analytics into operational processes, helping prevent fraud and cybercrime rather than detecting it after the fact. It’s a big difference that can make all the difference for financial-services institutions that have been pummelled by data breaches and struggled to maintain consumer confidence in their fraud and data-privacy protections. A recent Unisys survey found that Australians are by far the least trusting of their banks’ data protections – a perception that is hardly helped by incidents such as the recent exploitation of the Westpac PayID payment service. Indeed, true to expectations, the increasing pace of financial-services transactions – for example, through Australia’s New Payments Platform (NPP), on which PayID was built – had had a flow-on effect in terms of the fraud it facilitates. This trend towards real-time transactions, Henderson said, has ratcheted up the urgency for every company to understand the vulnerabilities in their payments processes and intelligently apply targeted machine learning-driven analytics to prevent fraud – not just detect it after the fact.


New Wearable Voice Recognition Sensor Cuts out Ambient Noise Interference

A pair of researchers from Pohang University of Science & Technology came up with a technology that’s superior to current voice recognition options, and it could lead to more accuracy even when people use voice recognition in potentially noisy areas, like train stations or shopping malls. If a person puts their hand against their throat while speaking, it’s easy to feel the vibrations associated with the voice. The researchers took that into account while developing their voice recognition sensor. It’s a wearable device that recognizes a person’s voice according to how their neck skin vibrates. That approach means things like ambient noise or the volume of a person’s speech do not risk making it harder to decipher. The scientists determined sound pressure is proportional to the acceleration of the neck skin’s vibration at certain sound levels, and that they could use that knowledge to create a sensor that quantitatively measured the voice. They made a device consisting of a slim polymer film and a diaphragm featuring tiny holes.



It was a really bad month for the internet

On June 24, Cloudflare dropped 15% of its global traffic during an hours-long outage because of a network route leak. The networking giant quickly blamed Verizon (TechCrunch’s parent company) for the fustercluck. Because of inherent flaws in the border gateway protocol — which manages how internet traffic is routed on the internet — Verizon effectively routed an “entire freeway down a neighborhood street,” said Cloudflare in its post-mortem blog post. “This should never have happened because Verizon should never have forwarded those routes to the rest of the Internet.” Amazon, Linode and other major companies reliant on Cloudflare’s infrastructure also ground to a halt. A week later, on July 2, Cloudflare was hit by a second outage — this time caused by an internal code push that went badly. In a blog post, Cloudflare’s chief technology officer John Graham-Cumming blamed the half-hour outage on a rogue bit of “regex” code in its web firewall, designed to prevent its customer sites from getting hit by JavaScript-based attacks.


How Is AI Driving Software Testing?

AI is also capable of optimising testing itself. This can help find which tests are most efficient and accurate, and reduces the production of redundant test cases. Software testing can be an expensive process, so ensuring the right tests are carried out is a key element, one that can be sped up with the help of artificial intelligence. Testing the impact of changes on the business via customers is also a key way that AI is being utilised. This will help identify any issues caused to the customer by updates and new releases, and allow quick changes before customers are driven away from the software, rather than waiting for customer feedback. It can even be predicted ahead of a release whether customer satisfaction will go up or down with a new version of the software. This gives you the opportunity to fine-tune software ahead of a new release, to ensure customers are retained. This may be how AI is shaping software testing right now, but there is plenty more to come. More complex methodology that tests interconnected tech, like Internet of Things devices, will be transformed by AI, which will also factor in whether the end user believes the result is correct.


Encryption laws are creating an exodus of data from Australia: Vault


"As multinational companies move physical, operational, and legal jurisdiction offshore, they easily side step the AA Act -- in effect thwarting the AA Act," Vault said. "Current legislation does not prevent these companies continuing to provide services to Australia citizens, companies or government. In effect, these companies are eluding the law and attaining revenue while every day Australian citizens are suffering the consequences." A submission by the Australian Civil Society Coalition -- consisting of Digital Rights Watch, Blueprint for Free Speech, Human Rights Law Centre, NSW Council for Civil Liberties, Queensland Council for Civil Liberties, Liberty Victoria, Access Now, Electronic Frontiers Australia, and Future Wise -- reiterated prior calls for the laws to be entirely repealed. The coalition called for an "enforceable federal human rights framework" to prevent Australia being the weakest link in the Five Eyes network, as well as for protection for whistleblowers in relation to the encryption laws, and the use of warrants and judicial consent for notices issued.


Size Does Matter: Tackling SMB Cybersecurity Concerns


One of the main issues is businesses’ perception of cybersecurity. Many view the whole premise as a business drain: a tick-box exercise, or time and money spent on something for which it is difficult to show ROI. Cybersecurity was cited as just one consideration as businesses become more established and grow. SMB leaders say that their biggest priorities are attracting new customers (36%) and increasing business growth and profitability (29%). In fact, only 35% perceive cybersecurity as a significant threat. However, these cyber threats take up a significant amount of SMBs’ time: leaders admit they spend almost a day per week (or 18% of their time) on cybersecurity-related tasks. Ultimately, customer relationships and contracts are on the line, but few businesses focus on effective cybersecurity education for employees. Rather than detracting from growth, cybersecurity investment can be viewed as a facilitator and differentiator for SMBs over the long term.


New digital banks challenge HSBC’s Hong Kong dominance


“We’re not ignoring the fact that we’ve got eight new competitors,” said Mr Martin, who noted the high calibre of the investors in the new virtual banks. “But our reaction is, we know what we need to do and we’re doing it already.” Some of HSBC’s established rivals have decided to launch their own digital lenders: StanChart has teamed up with Hong Kong Telecom and online travel agent Ctrip; Bank of China is working with the Jardine conglomerate and an offshoot of China’s JD.com ecommerce group. StanChart also generates a significant chunk of its profits from Hong Kong retail banking — making $740m of pre-tax profit in the business last year, or 30 per cent of the total for the bank. However, its customer base is dominated by older wealthy customers, meaning the virtual bank could pick up new customers without cannibalising too much of its existing business. Samir Subberwal, head of retail banking in China and Asia for StanChart, pointed out that its partners Ctrip and Hong Kong Telecom have about 5m customers to whom it can market the new venture.


Threat vs. Challenge

Think about it: threats are inherently a bad thing. They cause harm, provide little room for growth or learning, and rarely provide any opportunity or feedback beyond surviving through the ordeal. None of that helps you feel confident about preparing or allows you to look forward to the event. Instead, it fills you with a sense of dread, apprehension, and anxiety - none of which are facilitative emotions. So, what do we do about it and how can we get back to maximizing our impact? It comes down to the Performance Mindset skills of reframing and perspective. When you start to dive deeper, you realize the problem isn't the importance of the event; the problem is how you view the event. By reframing the way you view your upcoming event, you can start to see it as a challenge instead of a threat. What’s the difference between a threat and a challenge? Everything. Where a threat is inherently bad, challenges are usually viewed as good or fun. Where threats provide little room for growth or learning, challenges, by nature, drive growth.


The Important Difference Between Virtual Reality And Mixed Reality

Mixed reality is the latest immersive technology and, as a result, there aren’t as many publicised use cases of it compared to virtual reality. However, in its latest iteration, HoloLens 2 is an untethered device that Microsoft hopes will have many business applications, helping people across an organisation communicate, collaborate and learn together. Ford is using mixed reality technology for business purposes: it uses the HoloLens to prototype vehicles in a virtual environment, skipping the physical prototypes of the conventional production method. Mixed reality takes a lot more processing power than either virtual or augmented reality, and it relies on an MR headset that offers either a holographic experience through translucent glasses or an immersive experience. While less immersive than a virtual reality experience, mixed reality pulls from virtual and augmented realities, joining virtual and real worlds to create an extremely believable and effective interaction. Users can interact with virtual objects through gesture, gaze and voice recognition in the headset, or with a pair of motion controllers.



The Biggest Cybersecurity Crises Of  2019 So Far

One of the most concerning corporate data breaches so far this year is that of the American Medical Collection Agency, a massive healthcare-related debt collector. The company discovered that it had been breached in March, and filings with the US Securities and Exchange Commission indicate that the intrusion on AMCA's systems lasted from August 2018 through March 2019. The incident was first publicly reported at the beginning of June, after the medical testing firm LabCorp said that 7.7 million of its customers had data exposed because of AMCA, and Quest Diagnostics said records from 12 million of its patients had been exposed. AMCA said that the compromised information included first and last names, dates of birth, phone numbers, addresses, dates of medical services, healthcare providers, and data on balances due. The stolen information did not include insurance ID numbers or Social Security numbers. Because AMCA contracted with so many companies, it's possible that additional organizations—and therefore other patients—were affected as well.



Quote for the day:


"It is one thing to rouse the passion of a people, and quite another to lead them." -- Ron Suskind