Daily Tech Digest - April 01, 2019

Instead of using wipers, Symantec reports that the group’s recent attacks are aimed at data exfiltration using vulnerabilities in a common piece of software. “The main point of entry in recent attacks has been spear-phishing emails capable of delivering malware to the recipient’s computer,” says Dick O’Brien, researcher at Symantec's Security Response. “The group has also attempted to exploit the recently patched WinRAR vulnerability in its attacks.” After a phishing email reaches a targeted company, the victim is encouraged to download a file, JobDetails.rar, which then tries to exploit the WinRAR vulnerability CVE-2018-20250. A successful infection on an unpatched system allows an attacker to install any file on the computer. ... “Based on its tactics and targets, our assessment is that Elfin is a state-sponsored espionage group,” says O’Brien. “Given the nature of the group and its targets, we can only speculate that the information in question is likely to be of a strategic or economic interest to Elfin’s sponsors.”


identifier state machine
Lexing is the process of breaking an input stream of characters into "tokens" - strings of characters that have a "symbol" associated with them. The symbol indicates what type of string it is. For example, the string "124.35" might be reported as the symbol "NUMBER" whereas the string "foo" might be reported as the symbol "IDENTIFIER". Parsers typically use lexers underneath, and then compose the incoming symbols into a syntax tree. Because lexers are called in core parsing code, lexing operations must be reasonably efficient. The .NET regular expression engine isn't really suitable here; while it can work, it actually increases code complexity while diminishing performance. Included in this project is a file called "FA.cs" that contains the core code for a regular expression engine using finite automata, which resolves to the humble yet ubiquitous state machine. Finite state machines are composed of graphs of states. Each state can reference another state using either an "input transition" or an "epsilon transition".
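The NUMBER-versus-IDENTIFIER distinction described above can be sketched as a small hand-rolled state machine. This is a purely illustrative Python sketch of the idea, not the FA.cs engine from the project:

```python
# Minimal hand-rolled lexer: each branch below is effectively a DFA state.
def lex(text):
    """Yield (symbol, lexeme) tokens from the input string."""
    i, n = 0, len(text)
    while i < n:
        c = text[i]
        if c.isspace():
            i += 1
            continue
        start = i
        if c.isdigit():
            # NUMBER state: digits, optionally one '.' followed by digits
            while i < n and text[i].isdigit():
                i += 1
            if i < n and text[i] == '.':
                i += 1
                while i < n and text[i].isdigit():
                    i += 1
            yield ("NUMBER", text[start:i])
        elif c.isalpha() or c == '_':
            # IDENTIFIER state: letter/underscore, then letters/digits/underscores
            while i < n and (text[i].isalnum() or text[i] == '_'):
                i += 1
            yield ("IDENTIFIER", text[start:i])
        else:
            raise ValueError(f"unexpected character {c!r} at position {i}")

print(list(lex("foo 124.35")))
# → [('IDENTIFIER', 'foo'), ('NUMBER', '124.35')]
```

A real lexer generator compiles regular expressions into exactly this kind of transition logic ahead of time, which is why it outruns a general-purpose regex engine invoked per token.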


AI and data security: a help or a hindrance?

Having the right technology in place is vital, but companies need the right people to ensure it runs effectively. In many cases we’re seeing a shift toward companies bringing senior security talent in-house rather than relying on external partners to bolster their security infrastructure. But organisations still have a long way to go when it comes to building security expertise from within. More than half (52%) of respondents in a recent poll by Infosecurity Europe said they have a skills shortage in their organisation when it comes to preventing cyber attacks. Without the right team and technology, cyber attacks will only grow in severity. Neither can work effectively in isolation, and those organisations that don’t invest in both will find that the impact of a data breach goes far beyond fines. Businesses know that there is a high risk of cyber attacks, and the majority are trying to build the right team and implement technology to tackle cyber security. But very few leaders truly understand where all data leak vulnerabilities exist and how to prevent them.


Creating HTML Layouts That Meet Web Accessibility Standards

Example of HTML Elements and ARIA Landmarks in a Page Layout.
Use ARIA landmarks across web pages where appropriate. ARIA (Accessible Rich Internet Applications) is a comprehensive technical specification for adding accessibility information to elements that are not natively accessible (in particular, those developed with JavaScript, AJAX, and DHTML). With ARIA landmarks, a developer can extend HTML capabilities and apply proper semantics, i.e. properties, to UI and content elements so that assistive technologies can understand them. Here is an example of how HTML semantic elements (<header>, <nav>, <main>, <footer>) are combined with ARIA role attributes (“banner”, “navigation”, “main”, “contentinfo”) to make website navigation with a screen reader easier for people with disabilities. Though most ARIA functions were recently implemented in HTML5 (and you should definitely favor these!), not all screen readers and browsers (e.g. IE) are sophisticated enough to depend on HTML semantics alone.
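A minimal sketch of the layout the paragraph describes, pairing each semantic element with its ARIA role (technically redundant in modern browsers, but safer for older screen reader/browser combinations); the content inside each landmark is placeholder text:

```html
<header role="banner"><h1>Site title</h1></header>
<nav role="navigation">
  <a href="#main">Products</a>
</nav>
<main id="main" role="main">
  <p>Primary page content.</p>
</main>
<footer role="contentinfo"><p>Contact and copyright details.</p></footer>
```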


Undertake software dependency management to reduce conflicts 


Since dependencies can take numerous forms, it's easy to end up with too many. When software depends on many packages or components, the application might have significant compatibility problems and can be plagued by long downloads, plus require lots of storage space. Similar problems occur with long dependency chains, where components depend on other components, and so on. Dependencies can conflict when multiple applications rely on different, incompatible versions of the same dependency. For example, if one application depends on component A.1 and another depends on component A.2, but A.1 and A.2 cannot be installed together, a conflict occurs, and many conflicts are more convoluted than this example. In such circumstances, both apps cannot run on the same system at the same time, or the application with the older dependency might need an update to use the current version. Circular dependencies can also affect software applications or their constituent components.
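The A.1/A.2 scenario above can be sketched as a simple conflict check. The function and the data are hypothetical, purely to show how incompatible version requirements surface when two applications share a dependency:

```python
# Illustrative sketch: flag packages that two apps pin to different versions.
def check_conflicts(requirements):
    """requirements: list of (app, package, version) tuples.
    Returns a list of (package, first_claim, conflicting_claim)."""
    wanted = {}       # package → (app, version) first seen
    conflicts = []
    for app, pkg, ver in requirements:
        if pkg in wanted and wanted[pkg][1] != ver:
            conflicts.append((pkg, wanted[pkg], (app, ver)))
        else:
            wanted.setdefault(pkg, (app, ver))
    return conflicts

reqs = [("app1", "A", "1"), ("app2", "A", "2"), ("app2", "B", "3")]
print(check_conflicts(reqs))
# → [('A', ('app1', '1'), ('app2', '2'))]
```

Real package managers solve a much harder version of this problem, since each requirement is a version *range* and the whole transitive graph must be satisfied at once, but the basic collision is the same.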


Advancing OpenCL™ for FPGAs

Intel has created Intel® FPGA SDK for OpenCL™ technology, which provides an alternative to HDL programming. The technology and related tools belong to a class of techniques called high-level synthesis (HLS) that enable designs to be expressed at higher levels of abstraction. Intel FPGA SDK for OpenCL technology is now in widespread use. Amazingly for longtime FPGA application developers, the performance achieved is often close to, or even better than, HDL code. But it also seems apparent that achieving this performance is often limited to developers who already know how the C-to-hardware translation works, and who have an in-house toolkit of optimization methods. At Boston University, we’ve worked on enumerating, characterizing, and systematizing one such optimization toolkit. There are already a number of best-practices documents for FPGA OpenCL. This work augments them, largely by applying additional methods well known to the high-performance computing (HPC) community. In this methodology, we believe we’re on the right track.


We are spending a lot of R&D time and effort figuring out what that looks like in our world. In our human resources products, we call it augmented intelligence. You can look at the data; you can discern certain things that are going on with your workforce, such as diversity. Once you get into augmented intelligence in a human capital management environment, you can literally train the product to tell you things about the workforce through ongoing analytics. With Intacct, we’ve talked a lot about artificial intelligence. When dealing with the close [for bookkeeping], especially for publicly traded companies, what if you could, over time through artificial intelligence, just always have an ongoing close? So it was never a monolithic event? Transactions were always updated. You had triggers that showed you potentially fraudulent transactions. You’re cleaning up your books as you go along. There is no notion of a period-end close. You’re always closing. You could teach an AI engine how to do a continuous financial close. Those are the kinds of things we are trying to bring to bear within our products.


Is Artificial Intelligence Really the Future? Let's Explore
The fresh recognition given to the pioneers of artificial intelligence, computer scientists Yoshua Bengio, Geoffrey Hinton and Yann LeCun, with the Turing Award, an honour better known as the technology industry’s version of the Nobel Prize, shows that the world is acknowledging the relevance of emerging tech. AI has become part of the DNA of tech giants like Google. To maintain the integrity of this technology and address the ethical concerns around the growth of artificial intelligence, the company has created an Advanced Technology External Advisory Council to keep AI in check and shape the "responsible development and use" of AI in its products. Apart from being one of the fastest-growing technologies in science, AI has taken the crown as the front-runner for digital transformation, which has become a major part of every company’s agenda; 40 per cent of that transformation is expected to be achieved by employing artificial intelligence. Smart assistants are improving decision-making in diverse fields, from medicine and IT to education.


Critical Magento SQL injection flaw could be targeted by hackers soon

Due to its popularity and the sensitive customer data it processes, the Magento platform is an attractive target for hackers and has been hit by widespread attacks many times in the past. The number of attacks against online shops in general has increased over the past year, with some groups of hackers specializing in web skimming -- injecting rogue scripts into websites to capture credit card details. SQL injection vulnerabilities allow injecting data into, or reading information from, databases. Even if this particular flaw can't be used to infect a website directly, it can potentially give attackers access to accounts on a site. That access can then be used to exploit one of the other privilege escalation or code execution flaws that were patched in this release and which require authentication. "Unauthenticated attacks, like the one seen in this particular SQL Injection vulnerability, are very serious because they can be automated — making it easy for hackers to mount successful, widespread attacks against vulnerable websites," the Sucuri researchers warned.
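To illustrate the class of bug (not Magento's actual PHP code), here is a minimal Python/sqlite3 sketch contrasting a query built by string concatenation with a parameterized one:

```python
# Contrast a vulnerable string-built query with a parameterized one.
# sqlite3 is used only for illustration; the table and data are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: attacker input is spliced into the SQL text itself,
# so the injected OR clause makes the WHERE match every row.
leaked = conn.execute(
    f"SELECT secret FROM users WHERE name = '{attacker_input}'"
).fetchall()
print(leaked)   # → [('s3cret',)]

# Safe: a bound parameter is treated purely as data, never as SQL.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(safe)     # → []
```

The "unauthenticated and automatable" danger the Sucuri researchers describe follows directly from the first form: the payload is just text, so attackers can spray it at thousands of sites with a script.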


C# Futures: Deferred Error Handling

In order to use deferred error handling, a new compiler directive called “exception mode” is used. This switches the current function between structured exception handling and the new deferred mode. When using the deferred mode, the Exception.LastException property can be used to determine whether an error has occurred. This stores only the most recent error, so if multiple errors occur, all but the last will be lost. This has caused some concern, as it would mean one should check LastException after each line, which would be contrary to the goal of reducing the amount of code needed. To address this, an amendment to the proposal that would replace LastException with a stack of exceptions is under consideration. ... The use of both structured and deferred error handling in the same function can be problematic from a compiler standpoint. Deferred mode fundamentally changes the way the code is compiled, much like how C# implements closures and async/await without CLR support.
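Since this is still only a language proposal, here is a rough Python sketch of the semantics being described: errors are recorded rather than thrown, and only the most recent survives, which is exactly the concern raised about LastException. The helper names are invented for illustration:

```python
# Emulate "deferred mode": record failures instead of propagating them.
last_exception = None

def deferred(fn, *args):
    """Run fn; on failure, store the error rather than raising it."""
    global last_exception
    try:
        return fn(*args)
    except Exception as exc:
        last_exception = exc   # overwrites any earlier recorded error
        return None

deferred(int, "not-a-number")   # a ValueError occurs and is recorded
deferred(lambda: 1 / 0)         # a ZeroDivisionError silently replaces it
print(type(last_exception).__name__)  # → ZeroDivisionError
```

The second call clobbering the first is the lost-error problem; the proposed amendment would push each exception onto a stack instead of overwriting a single slot.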



Quote for the day:


"Real leadership is being the person others will gladly and confidently follow." -- John C. Maxwell


Daily Tech Digest - March 31, 2019

Commonwealth Bank and Westpac cautious on using AI for compliance


"Regtech is at an early stage in its life cycle, but we are trying to get it mature, make it understood and get it linked into the strategic core – that is a big focus for us," Ms Cooper said. At the regtech event, a key frustration from start-ups was the time it took for new, outside technology to be considered by the ultimate decision-makers. "It's extremely hard. I've probably been in Westpac's building 30 times in the last year and a half or so," said SimpleKYC founder Eric Frost, whose system is being used by American Express to reduce customer on-boarding times. Neither he nor Westpac would disclose whether Westpac had started using the system. Ms Cooper said Westpac's senior management was encouraging collaboration with start-ups, but at the same time, compliance was an area where there was little room for failure. "There is a very strong strategic focus, right from the top, on partnering, not sitting in our ivory tower thinking we can build it ourselves, which creates the conditions for us to do things like 'minimum viable procurement'," she said.


Be Unreasonable in Pursuing Your Goals
The point is that you need a lot of quarters. You can’t rely on one of anything. Big dreams start with money because it’s measurable. Your parents told you to be reasonable, to play it safe. Rich people do not say "money is not everything." Don’t be a victim. Don’t assign blame to anyone other than yourself. Quit making excuses. Get your heart in the deal all the time. Have enough so that nothing can stop you. Embrace this thing called sales. Every business I ever started was built on making sales. No sales means no business. You don’t need therapy, you need to take action. You don’t need to write a business plan or organize your address book. Without money coming in you’re dead in the water. The healing is in the doing, not the thinking. Sales is about doing. It’s the ask, the follow-up. Were you told not to be too persistent when you were a kid? "Money doesn’t grow on trees" is code for "I don’t know how to bring money in." Replace the excuses with the truth. The truth is you’re lost when it comes to income.


Wiring financial organisations for regulatory success

Technology can help tie regulations to internal processes. Structured data sets mean that it’s possible to connect the dots between policies/procedures and processes, systems, controls and products and services through structured content and ML tagging. A clear link to the broader risk-management framework, governance, and processes is necessary at all levels of the hierarchy, across both large and small companies. No longer is this something presented as a futuristic view at conferences and industry events, but a new reality which regtech is bringing to life. With the use of technology, a huge amount of data that offers significant insight into risk can be captured for evidencing and provided to regulators in a detailed structured format that is easy to understand. Needless to say, such a technology-driven holistic structured approach to data is fast becoming the only viable way to successfully manage policies and stay compliant in the current regulatory landscape.


How Insurers Can Tackle Cyber Threats in the Digital Age

Persistent knowledge gaps hinder the creation of effective cybersecurity cultures. Human error accounts for a significant share of cyber breaches, with phishing schemes alone responsible for three-quarters of malware hitting organizations globally, according to NTT Data. And while there’s broad recognition that cyber-attacks pose a major threat to organizations, there’s a stark divide between IT professionals and corporate leadership regarding the effectiveness of organizational protocols. In one survey, 59 percent of corporate board members said that their organizations’ cybersecurity governance practices were very effective, while only 18 percent of IT professionals agreed. Insurers can work with their clients to achieve a unified understanding of cybersecurity policies and terms. When all stakeholders operate according to a standardized cybersecurity framework, organizations can better manage risk, understand their vulnerabilities, respond to emerging threats and contain the fallout of breaches.


30+ Powerful Artificial Intelligence Examples you Need to Know

Trading algorithms are already used successfully in the world’s markets, a testament to the staggering speed with which computer systems have transformed stock trading. Even though automation rules the trading world, the most complex algorithms use basic AI reasoning. Machine learning is poised to change this tradition by making decisions more hard-data driven and less grounded in trading theories. While humans will always play a role in regulation and in making the final decisions, more and more financial transactions are making their way to computer systems. Plus, given the competitive nature of this field, investment in AI and machine learning will be one of its most defining aspects. Luckily, these technologies have the potential to stabilize, not disrupt, the financial industry, resulting in better job stability (and even a reduced probability of market crashes).


Why a Digital Mindset Is Key to Digital Transformation

While infrastructure and technology are clearly important considerations, digital transformation is as much about the people and changing the way they approach business problems and where they look to find solutions. In fact, according to Gartner research analyst Aashish Gupta, many organizations forget to address the necessary cultural shift needed to change the mindset of workers, without which no digital transformation project is going to succeed. "The culture aspect and the technology demand equal attention from the application leader, because culture will form the backbone of all change initiatives for their digital business transformation. Staff trapped in a 'fixed' mindset may slow down or, worse, derail the digital business transformation initiatives of the company,” he said in a statement. To encourage a change in mindset from traditional to digital, Gartner has developed a four-step plan which it outlines in its report "Digital Business Requires a New Mindset, Not Just New Technology," due to be released soon.


The new third-party oversight framework: Trust but verify

There is a need to identify risk at different points in the third-party life cycle: at the commencement of the relationship, and on a regular basis thereafter, based on a number of factors that influence the risk the third party generates, such as privacy, regulatory compliance, business continuity planning, and information security. However, there also needs to be an early warning system that can alert management to a potential increase of risk outside of these scheduled assessments. This is where the link to risk appetite, key performance, and risk indicators comes into play. The risk oversight functions need to work together to build a set of factors that can assess the inherent risk associated with an activity, plus any increase in risk associated with outsourcing the activity to a third party, the mitigating effect of existing measures employed by the institution and the third party to control that risk, and the remaining, or residual, risk that the institution continues to bear.
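The closing sentence describes, in effect, a simple calculation: gross risk (inherent risk plus the uplift from outsourcing) reduced by the mitigating effect of controls. A hypothetical numeric sketch, with all scales and values invented for illustration:

```python
# Residual risk = (inherent + outsourcing uplift), capped, then reduced
# by the combined effectiveness of the institution's and third party's
# controls. Scales (0-1) and example values are purely illustrative.
def residual_risk(inherent, outsourcing_uplift, control_effectiveness):
    """All inputs on a 0-1 scale; control_effectiveness in [0, 1]."""
    gross = min(1.0, inherent + outsourcing_uplift)
    return round(gross * (1.0 - control_effectiveness), 3)

# A high-inherent-risk activity, partly mitigated by joint controls:
print(residual_risk(0.6, 0.2, 0.5))  # → 0.4
```

An early-warning system would then watch for movements in the inputs (a control failing, a vendor incident) between scheduled assessments, rather than waiting for the next full review.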


What data dominance really means, and how countries can compete

A lot of the current debate approaches data from the supply side, asking about ownership and privacy. These are no doubt important questions. But countries need to think deeply about the demand side: are they growing local industries that will make use of data? If not, they will find themselves forever exporting raw data and importing expensive digital services. People say data is like oil. But it isn’t, really. For one thing, data isn’t “fungible”: you can’t swap one piece of information for something else. Knowing my Amazon purchase history won’t help a self-driving car identify a stop sign. This is true even when data is of the exact same type: my browsing history may not be as valuable as yours. This non-fungible nature shows up in my estimates of Facebook’s average monthly revenue per user, which show that the average Canadian user generates 100 times more revenue than the average Ethiopian user.


5 Ways Marketers Can Gain an Edge With Machine Learning

In the past -- and occasionally today -- these recommendations were manually curated by a human. For the past 10 years, they have often been driven by simple algorithms that display recommendations based on what other visitors have viewed or purchased. Machine learning can deliver substantial improvements over these simple algorithms. Machine learning can synthesize all the information you have available about a person, such as his past purchases, current web behavior, email interactions, location, industry, demographics, etc., to determine his interests and pick the best products or the most relevant content. Machine learning-driven recommendations learn which items or item attributes, styles, categories, price points, etc., are most relevant to each particular person based on his engagement with the recommendations -- so the algorithms keep improving over time. And machine learning-driven recommendations are not limited to products and content. You can recommend anything -- categories, brands, topics, authors, reviews vs. tech specs etc.
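The feedback loop described, recommendations improving as the system observes engagement, can be illustrated with a toy affinity model. The names and the update rule are invented for illustration and are not any vendor's API:

```python
# Toy sketch: per-(user, attribute) weights are nudged up on clicks and
# down on ignored recommendations, so rankings improve with feedback.
from collections import defaultdict

weights = defaultdict(float)   # (user, item_attribute) → learned affinity

def record_engagement(user, attributes, clicked, lr=0.1):
    """Reward or penalize the attributes of a recommended item."""
    for attr in attributes:
        weights[(user, attr)] += lr if clicked else -lr

def score(user, attributes):
    """Rank candidate items by the user's learned attribute affinities."""
    return sum(weights[(user, attr)] for attr in attributes)

record_engagement("ann", ["shoes", "budget"], clicked=True)
record_engagement("ann", ["shoes", "luxury"], clicked=False)
# "budget shoes" now outrank "luxury shoes" for ann:
print(score("ann", ["shoes", "budget"]) > score("ann", ["shoes", "luxury"]))
# → True
```

Production systems use far richer models, but the core loop is the same: every interaction is a training signal, which is why machine-learned recommendations keep improving where hand-curated or "others also viewed" rules stay static.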



Why DevOps Fails: Some Key Reasons To Consider

It is important to know why culture matters. Culture is the set of practices, standards, beliefs, and structures that reinforce the organization. DevOps is not only a set of tools; you must create a culture of DevOps in your organization to get the results you seek. A U.S. government agency that adopted DevOps for continuous deployment failed to recognize the importance of people and process, which led to misconduct and confusion among developers and key people. ... Every organization is a technology-driven organization, regardless of domain. The journey from digital transformation to a continuous digital journey demands flexibility, agility, and quality above all. DevOps has become a necessity for organizations that deliver software or frequently release updates or new features in order to serve their customers with quality and superiority. There is no doubt that DevOps can make software development faster, but every organization has a different set of requirements, and each company's DevOps adoption must be tailored to that set of requirements.



Quote for the day:


"Leadership is the other side of the coin of loneliness, and he who is a leader must always act alone. And acting alone, accept everything alone." -- Ferdinand Marcos


Daily Tech Digest - March 30, 2019

As memory prices plummet, PCIe is poised to overtake SATA for SSDs

PCIe is several times faster and has much more parallelism, so throughput is more suited to the NAND format. It comes in two physical formats: an add-in card that plugs into a PCIe slot and M.2, which is about the size of a stick of gum and sits on the motherboard. PCIe is most widely used in servers, while M.2 is in consumer devices. There used to be a significant price difference between PCIe and SATA drives with the same capacity, but they have come into parity thanks to Moore’s Law, said Jim Handy, principal analyst with Objective Analysis, who follows the memory market. “The controller used to be a big part of the price of an SSD. But complexity has not grown with transistor count. It can have a lot of transistors, and it doesn’t cost more. SATA got more complicated, but PCIe has not. PCIe is very close to the same price as SATA, and [the controller] was the only thing that justified the price difference between the two,” he said. DigiTimes estimates that the price drop for NAND flash chips will cause global shipments of SSDs to surge 20 to 25 percent in 2019.


Edge computing is real. It's here, and companies have to have a strategy to handle the enormous influx of data coming in real time from devices globally. Analysts project there will be 50 billion telematics devices by 2020 and forecast the sum of the world's data will reach 175 zettabytes by 2025. Although edge computing is putting enormous pressure on IT infrastructure -- where legacy systems at the networking, storage, and application layers are straining today -- a new generation of systems is coming to market to help companies deal with the data explosion caused by edge computing. What is most exciting is the ability these new systems give companies to engage with customers in fundamentally new ways. There are examples of new business models being developed around the edge -- Netflix, Uber, and Amazon are notable examples -- but now many companies can adopt these new business models with next-generation, edge-aware systems emerging today.


The second-biggest improvement that Microsoft has made in HoloLens 2 is that the gesture control has been revamped. If I am to be completely honest, I have never had the best luck with getting HoloLens gestures to work. I always assumed that I was doing something wrong, because nobody else that I have talked to seems to have any trouble. From what I have heard about HoloLens 2, a new artificial intelligence (AI) processor and something called a time-of-flight depth sensor will collectively make it so that HoloLens will allow you to interact with holographic objects in the same way that you would interact with their real-world counterparts. This might mean being able to pick up a hologram and move it as if it were a physical object, as opposed to having to resort to using the convoluted gestures that are currently required. It remains to be seen how this new capability will actually be implemented, but I have high hopes that using HoloLens 2 will be far more intuitive than using its predecessor.


How to eliminate the security risk of redundant data
Most enterprises migrate their data to the public cloud in that second way: they just cart it all from the data center to the cloud. Often there is no single source of truth in the on-premises databases, so all the data moved to the public cloud keeps all its redundancies. Although it’s an architectural no-no, the reality is that most systems are built in silos, which is where the redundancies come from. They often create their own versions of common enterprise data, such as customer data, order data, and invoice data. As a result, most enterprises have several security vulnerabilities that they have inadvertently moved to the cloud. ... The best solution to this problem is to not maintain redundant data. I’m sure the CRM system has APIs that allow secure access to customer data and can be integrated directly into the inventory system, or the other way around. The goal is to maintain data in a single physical location, even if it is accessed by multiple systems. Even if you do eliminate most of the redundant data, all your data should be secured under a holistic security system that’s consistent from application to application and from database to database.


Vulnerability management woes continue, but there is hope
Let data analytics be your guide. In other words, take all your vulnerability scanning data and analyze it across a multitude of parameters, including asset value, known exploits, exploitability, threat actors, CVSS score, similar vulnerability history, etc. This analysis can be used to calculate risk scores, and these risk scores can help guide organizations on which vulnerabilities should be patched immediately, which ones require compensating controls until they can be patched, which ones can be patched on a scheduled basis, and which ones can be ignored. Of course, few organizations will have the resources or data science skills to put together the right vulnerability management algorithms on their own, but vendors such as Kenna Security, RiskSense, and Tenable are all over this space. Furthermore, SOAR vendors such as Demisto, Phantom, Resilient, ServiceNow, and Swimlane are working with customers on runbooks to better manage the operational processes.
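A hypothetical weighted-scoring sketch of the triage described above; the weights, bands, and thresholds are invented for illustration and are not the actual models used by the vendors named:

```python
# Combine a few of the parameters listed above into a 0-100 risk score,
# then map score bands to the four triage outcomes from the text.
def risk_score(cvss, asset_value, exploit_available, threat_actor_activity):
    """cvss on a 0-10 scale; other inputs on a 0-1 scale."""
    score = (
        (cvss / 10) * 40
        + asset_value * 25
        + (20 if exploit_available else 0)
        + threat_actor_activity * 15
    )
    return round(score, 1)

def triage(score):
    if score >= 70: return "patch immediately"
    if score >= 50: return "compensating controls, then patch"
    if score >= 30: return "patch on schedule"
    return "accept / ignore"

s = risk_score(cvss=9.8, asset_value=0.9,
               exploit_available=True, threat_actor_activity=0.7)
print(s, triage(s))  # → 92.2 patch immediately
```

The value of this approach is that a critical CVSS score alone no longer forces an emergency patch: a 9.8 on a low-value, unexploited asset can land in a lower band than a 7.0 with an active exploit on a crown-jewel system.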


7 tips for stress testing a disaster recovery plan

A disaster recovery plan is a bit like an insurance policy: we all agree we need it and we all hope we’ll never use it. And as with insurance, nobody wants to discover their DR plan doesn’t actually protect them when a disaster hits. Similarly, nobody wants to find out that their DR plan is overdone – meaning they’ve been spending too much time, money and energy maintaining it. But if you don’t regularly stress test your DR plan, you could find yourself in one of these situations. I’ve worked with a lot of businesses, and I’ve noticed that few conduct regular stress tests of their DR plans. That’s a problem: no disaster recovery plan is good enough to magically transform as a business changes – and realistically, no business remains static. At a previous firm, we tested quarterly and found changes and updates during every test! So how can you verify that your DR plan fits your current needs? Follow these seven steps.


Cisco rates both of those router vulnerabilities as “High” and describes the problems like this: one vulnerability is due to improper validation of user-supplied input. An attacker could exploit this vulnerability by sending malicious HTTP POST requests to the web-based management interface of an affected device. A successful exploit could allow the attacker to execute arbitrary commands on the underlying Linux shell as root. The second exposure is due to improper access controls for URLs. An attacker could exploit this vulnerability by connecting to an affected device via HTTP or HTTPS and requesting specific URLs. A successful exploit could allow the attacker to download the router configuration or detailed diagnostic information. Cisco said firmware updates that address these vulnerabilities are not available and no workarounds exist, but that it is working on a complete fix for both. On the IOS front, the company said six of the vulnerabilities affect both Cisco IOS Software and Cisco IOS XE Software, one affects just Cisco IOS Software, and ten affect just Cisco IOS XE Software.


VS Code Python Type Checker Is Microsoft 'Side Project'

Deemed a work in progress with no official support from Microsoft and much functionality yet to be implemented, the GitHub-based project is described as an attempt to improve on currently available Python type checkers, with mypy mentioned specifically. Of course, the increasingly popular Visual Studio Code editor already sports a popular Microsoft-backed, jack-of-all-trades Python extension (just updated) that boasts more than 35 million downloads and 7.3 million installations and does type checking and a whole lot more. But Pyright isn't aiming to compete with that tool, rather just to improve on its type-checking capabilities, which are powered by the Microsoft Python Language Server that uses the language server protocol to provide IntelliSense and other advanced functionality for different programming languages in code editors and IDEs. "Pyright provides overlapping functionality but includes some unique features such as more configurability, command-line execution, and better performance," the GitHub project says.


Tapping security cameras for better algorithm training

For computer vision and facial recognition systems to work reliably, they need training datasets that approximate real-world conditions. So far, researchers have had access to only a small number of image datasets, many of which are heavily populated with still pictures of fair-skinned men. This limitation impacts the accuracy of the technology when it comes across types of images it's not familiar with – those of women or people of color, for instance. Another challenge is related to the varying quality of the images on video feeds available from surveillance cameras. Often the cameras' scope and angle, as well as the lighting or weather during a given recording, make it difficult for law enforcement to track or re-identify people from security camera footage as they try to reconstruct crimes, protect critical infrastructure and secure special events. To help solve this problem, the Intelligence Advanced Research Projects Activity has issued a request for information regarding video data that will help improve computer vision research in multicamera networks.


Huawei Security Shortcomings Cited by British Intelligence

The latest findings are contained in the fifth annual report to be issued by the NCSC's Huawei Cyber Security Evaluation Center, which the U.K. government launched in 2010 to review Huawei's business strategies and test all product ranges before they were potentially used in any setting that might have national security repercussions. The new report emphasizes that the findings should not imply that U.K. telecommunications networks are at any greater risk now than they were before. Rather, the findings are part of a high-level review to ensure that Britain's telecommunications networks remain as secure as possible. "We can and have been managing the security risk and have set out the improvements we expect the company to make. We will not compromise on the progress we need to see: sustained evidence of better software engineering and cybersecurity, verified by HCSEC," the NCSC spokeswoman says. "This report illustrates above all the need for improved cybersecurity in the U.K. telco networks, which is being addressed more widely by the digital secretary's review."



Quote for the day:



"Prosperity isn't found by avoiding problems, it's found by solving them." -- Tim Fargo


Daily Tech Digest - March 28, 2019

The firm's Risk in Review study said when risk management is at the top of its game, "leaders have a clear line of sight into threats for informed decision making." The report is based on a global survey of 2,073 CEOs, board members, and professionals in risk management, internal audit, and compliance, conducted in October and November 2018, and described six habits risk functions follow that help their companies set a course for sustainable growth. Digital transformations don't work well in isolation, the report said, because of the many connection points that can be exploited without proper controls. A well-thought-out and communicated digital strategy with growth targets and values anchors a risk culture. As organizations go all-in with transformations, the entire organization should prioritize items such as new technology, while risk functions set controls that map back to the strategy. In another survey recently conducted by the firm, CEOs globally said they expect the artificial intelligence (AI) "revolution to be bigger than the Internet revolution."


UK IoT research centre to tackle cyber risk


The centre’s research focus will be on the opportunities and threats that arise from edge computing, an innovative way to collect and analyse data in machine learning and artificial intelligence (AI) technology. When implemented successfully, edge computing can improve network performance by reducing latency, which is the time taken for data to traverse a system. “The centre’s ultimate aim is, by creating a trustworthy and secure infrastructure for the internet of things, to deliver a step change in socio-economic benefit for the UK with visible improvements for citizen wellbeing and quality of life,” said Jeremy Watson, Petras director and professor at University College London department of science, technology, engineering and public policy (STEaPP). “I expect productivity improvements and cost savings across a range of sectors including healthcare, transport and construction. In bringing together academics, industry technologists and government officials, our research will create accessible and relevant knowledge with clearly visible economic, societal or cultural impact that will help to cement the UK’s position as a world leader in this area.”


5 steps employers can take to retain project managers

Many project managers recognize their impact on the overall morale of the company and understand the need to remain positive and put on a "good face" for their teams, sponsors, and other stakeholders. The trouble is, employers may assume that what they see is what truly exists, and this can create a sense of complacency. As an employer, it's important to keep in regular contact with your project management professionals to ensure no issues are impacting their job satisfaction. Although your project managers are likely to remain consummate professionals and push through to ensure that their projects are executed successfully, they could have concerns in some areas yet not feel supported enough to say anything. In fact, many project managers feel a great deal of responsibility to put the needs of others ahead of their own, sometimes to their own detriment. Take the time to regularly sit down with your project management professionals and keep up to date with the issues that affect their job and employee satisfaction.


Mashreq Bank’s Lean Agile Journey

Snowdon stated that the goal of agile was to work in a more collaborative way, to get decisions closer to the customer, and to provide a better structure so that they could respond more quickly to customer-driven demand, rather than push products and services at them. Capaldi stated, "I believe in kaizen and kaikaku as central concepts all companies must value; I will therefore only get involved in a transformation if I see these. In this case I was pleasantly surprised by the passion Steve and his team had in wanting to fully understand agile and these concepts were clearly there, and the fact they also come from a lean background just like me helped." "The head of the division was also massively behind the transformation and we very quickly agreed the metrics that would track progress," Capaldi mentioned. He said that agile is a journey; he prefers to challenge his clients in that they aren't really trying to be agile, and that it's OK to start with "fake agile". "Fake it till you make it!" he said.


Mind the overlap between GDPR and ePD, warns privacy lawyer


According to Ustaran and Campion, as the digital economy progresses, European data protection law is likely to lead to a more harmonised approach to its interpretation and enforcement, as reflected by the EDPB's opinion. However, the situation going forward is far from clear-cut: the ePD was initially intended to be replaced by the proposed European ePrivacy Regulation (ePR) in May 2018, was then expected to be implemented at some point in 2019, and now looks likely to take a little longer. "The whole e-Privacy Directive / forthcoming Regulation and GDPR debate is one of the most complex legal conundrums going on at the moment in this space," Ustaran told Computer Weekly. "The recent EDPB opinion is very helpful in terms of understanding the regulators' thinking, but where the e-Privacy Regulation fits in is a big missing piece," he said. According to Ustaran, the e-Privacy Regulation is unlikely to be fully effective before 2020, given that the European Council has not decided on a preferred draft, which will then need to be discussed in detail with the European Parliament and the European Commission before being formally adopted.


Site reliability engineer shift creates IT ops dilemma


In some ways, the transitional struggle described by the SREcon attendee is unavoidable, according to experienced SREs who presented here this week. "If you talk to experienced veterans in the field, they might get a faraway look in their eye and say, 'Oh, yes, I remember that,'" said Jaren Glover, infrastructure ghostwriter at Robinhood, a fintech startup in Palo Alto, Calif. "A bit of this pain is par for the course." There are, unfortunately, no easy solutions to the problem, SREs said, though support from employers to hire new engineers and scale up site reliability engineer teams is crucial. "It's also a matter of prioritization," said Arnaud Lawson, senior infrastructure software engineer at Squarespace, a website creation company in New York, in an interview after his SREcon presentation on service-level objectives. "Even if 80% of the team is dedicated to firefighting, the rest can tap into automation to get rid of tedious work." At large enough companies, such as the professional networking site LinkedIn, SREs are sometimes repurposed from other teams to help those that struggle to meet team performance targets or who are overwhelmed by pager alerts.


Shared learning: Establishing a culture of peers training peers

“After you walk your teammates through how you apply a skill, let them test it out on their own to see whether they can repeat the process you used and achieve the same or a similar result,” he says. With so many organizations relying on technology for training, this hands-on aspect is key. “We’re moving from a world where just watching online tutorials and going to classes was enough to one that emphasizes experiential learning. Just knowing isn’t enough — it’s about doing,” Schawbel says. “If you’re lucky, your organization will give you access to learning, training, educational materials or subscriptions to various resources, but they aren’t actually providing the hands-on, peer-to-peer learning, mentorship, situational and project-based knowledge.” ... Once your coworkers have attempted to complete a task using the skill you taught them, review it, Schawbel says, but understand that nowadays, people don’t even like using the word “feedback,” and prefer “suggestions for improvement.” Here, the key is starting with the positive.


Understanding the role and need of a data protection officer

The DPO works alongside the other C-suite officers at your firm and maintains compliance with Data Protection Authority rules and regulations. This means they should be expert in, or at least well-versed with, the GDPR and all of its requirements, but it also means the DPO needs to understand the jurisdictional requirements of the other places around the world where your business operates. This responsibility is a serious one, and you should review the information available at the International Association of Privacy Professionals (IAPP) for further clarity. The IAPP is the world's largest information privacy community and provides comprehensive data privacy and regulatory certification training. Because you have gotten this far, you must believe that your business has opportunities to create value through your data and data partnerships. You have also certainly noticed the seemingly daily disastrous headlines about data breaches plaguing companies. There have been hundreds of different data breaches involving more than 30,000 records each; some of these breaches affected hundreds of millions of data subjects.


Identifying exceptional user experience (UX) in IoT platforms

Enterprises should pick IoT platforms with superlative access to on-platform configuration functionality with an emphasis on declarative interfaces for configuration management. Although many platform administrators are capable of working with RESTful API endpoints, good UX design should not require that platform administrators use third-party tools to automate basic functionality or execute bulk tasks. Some programmatic interfaces, such as SQL syntax for limiting monitoring views or dashboards for setting event processing trigger criteria, are acceptable and expected, although a fully declarative solution that maintains similar functionality is preferred. ... In general, the UX should be focused on providing information immediately required for the execution of day-to-day operational tasks while removing more complex functionality. These platforms should have easy access to well-defined and well-constrained operational functions or data visualization. An effective UX should enable easy creation and modification of data views, graphs, dashboards, and other visualizations by allowing operators to select devices using a declarative rather than SQL or other programmatic interfaces.


How IoT can transform four industries this year

"Among providers, IoT enablement will be leveraged toward the triple aim of cost, quality, and population health," Khaled said. Simple, embedded digital tools are already being piloted at large scale to mitigate infection risk around replaceable medical instruments, while smart threads and sticker or patch sensors have improved in their fidelity, tracking everything from cardiac readouts to body chemistry and sleep patterns. Among payers, IoT presents a distinct opportunity to enable smarter population risk management and accompanying reimbursement rate adjustments. IoT-enabled, long-term care facilities will be able to negotiate better rates if their sensor data supports fall risk and infection likelihood mitigation, Khaled said. The growing ecosystem of wearable fitness devices will help insurers recognize members who are (literally) taking steps to actively change their individual risk. IoT technologies supporting patient medication adherence will help both of these groups see major cost-saving and health improvement opportunities.



Quote for the day:


"True success is a silent inner process that can empower the mind, heart and soul through strong aspiration." -- Nur Sakinah Thomas


Daily Tech Digest - March 27, 2019

5 things you can do in 5 minutes to boost your internet privacy


For websites and services where you need to ensure the security of your account, like your bank, passwords alone simply are not enough anymore. In this scenario, you need two-factor authentication (2FA) -- specifically, the kind where a mobile app generates login codes for you. Not the kind where you are sent an SMS text message, because those can be intercepted or just fail to arrive. With app-based 2FA, you log into an app or website like normal, then you open an app that generates a special six-digit code every 30 seconds. This authentication app is synced with the other app or service so that your code matches the one that the main app or service expects to get. You enter the code from the authenticator app into the app or website that's asking for it, and then your login is complete. Google makes its own free authenticator app for iOS and Android. Unfortunately, there isn't a standardized method for setting up your account with 2FA. Amazon, PayPal, eBay and your bank will all use slightly different systems and terminology.
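The six-digit codes those authenticator apps generate follow the TOTP standard (RFC 6238): both sides share a secret, hash the current 30-second time step with HMAC-SHA1, and truncate the result to six digits, which is why the codes match without any network connection. A minimal sketch using only the standard library:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, at=None, digits=6, step=30):
    """Time-based one-time password (RFC 6238, HMAC-SHA1 variant)."""
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The RFC 6238 reference secret at its first published test time (T=59s):
print(totp(b"12345678901234567890", at=59))  # → 287082
```

Because the counter only changes every 30 seconds, the app and the server independently compute the same code within each window, and an intercepted code is useless moments later.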



Scaling Microservices: Identifying Performance Bottlenecks

A bottleneck is the point in a system where contention occurs. In any system, these points usually surface during periods of high usage or load. Once identified, the bottleneck may be remedied, bringing performance levels into an acceptable range. Utilizing synthetic load testing enables you to test specific scenarios and identify potential bottlenecks, although this only covers contrived situations. In most cases, it is better to analyze production metrics and look for outliers to help identify trouble on the horizon. Key performance indicators from your application include requests/sec, latency, and request duration. Indicators from the runtime or infrastructure include CPU time, memory usage, heap usage, garbage collection, and so on. This list isn't exhaustive; there may be business metrics or other external metrics that factor into your optimizations as well.
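Outlier-hunting in production metrics usually means looking at tail percentiles rather than averages, since a mean hides the slow requests that signal contention. A minimal sketch (nearest-rank percentile over a hypothetical sample of request durations; the numbers are illustrative, not from the article):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile of a list of request durations (ms)."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical request durations: mostly fast, with two slow outliers
# that a simple average would smooth over.
durations_ms = [12, 15, 11, 14, 250, 13, 16, 12, 300, 14]
p50 = percentile(durations_ms, 50)
p99 = percentile(durations_ms, 99)
print(f"p50={p50}ms p99={p99}ms")  # → p50=14ms p99=300ms
```

The gap between p50 and p99 here is the kind of outlier signal the excerpt recommends watching for before a bottleneck becomes an outage.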


The devil is in the data, but machine learning can unlock its potential

Effective data governance can enable intelligent real-time business decision-making that will, in turn, drive organisations in a more profitable direction. One of the best approaches when it comes to unleashing big data’s potential is investing in a data lake: a central repository that allows organisations to collect everything — every bit of data, regardless of its structure and format — which can then be accessed, normalised, explored and enriched by users across multiple business units to reveal patterns across a shared infrastructure. The advantage of this approach is that organisations can gain end-to-end visibility of the enterprise data and actionable business insights. The disadvantage is that the data has to be kept up to date, which takes time and effort. Another downside is the GDPR compliance and data security risks that are associated with depositing the entirety of an organisation’s business-critical data into a data lake.


Insights for your enterprise approach to AI and ML ethics

The promise of AI is in augmenting and enhancing human intelligence, expertise and experience. Think helping an aircraft mechanic make better, more accurate and more timely repairs – not automating the mechanic out of the picture. But the scope of what you can do is tempered by inherent limitations in today's AI systems. I like to frame this as a recognition that computers don't "understand" the world the way we do (if at all). I don't want to get into an epistemological discussion about the definition or nature of understanding, but here's what I think is a very illustrative and accessible example. One common application of AI is in image processing problems. That is, I show the machine an image – like what you might take with your phone – and the machine's task is to report back what's in the image. You build a system like this by feeding thousands, millions or even billions of images into an AI program (such as a neural network) – you might hope that somehow, as a result of processing all of these images, the software builds some kind of semantic representation of the world.


Alibaba's UC Browser can be used to deliver malware


Dr Web researchers note that for now UC Browser represents a "potential threat" but warn that all users could be exposed to malware due to its design.  "If cybercriminals gain control of the browser's command-and-control server, they can use the built-in update feature to distribute any executable code, including malware. Besides, the browser can suffer from MITM (man-in-the-middle) attacks," the security company notes. The MITM threat arises because UCWeb committed the security blunder of delivering updates to the browser over an unsecured HTTP connection. "To download new plug-ins, the browser sends a request to the command-and-control server and receives a link to a file in response. Since the program communicates with the server over an unsecured channel (the HTTP protocol instead of the encrypted HTTPS), cybercriminals can hook the requests from the application," explains Dr Web.  "They can replace the commands with ones containing different addresses. ... "
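The underlying flaw is that a payload fetched over plain HTTP can be swapped in transit with nothing to detect the swap. Beyond moving to HTTPS, a common mitigation is to verify each downloaded plug-in against a digest pinned in the app itself (or delivered over an authenticated channel). A minimal sketch, not UC Browser's actual mechanism; the payloads and names here are hypothetical:

```python
import hashlib

def verify_update(payload: bytes, expected_sha256: str) -> bool:
    """Reject any update whose digest doesn't match the pinned value."""
    return hashlib.sha256(payload).hexdigest() == expected_sha256

# The pinned digest would ship inside the app or arrive over an
# authenticated channel -- never over the same untrusted HTTP link.
plugin = b"legitimate plug-in bytes"
pinned = hashlib.sha256(plugin).hexdigest()

print(verify_update(plugin, pinned))                # genuine file passes
print(verify_update(b"attacker payload", pinned))   # MITM swap is rejected
```

Even over an unencrypted transport, a man-in-the-middle who substitutes the file cannot produce a matching digest, which is exactly the replacement attack Dr Web describes.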


Deep Learning for Speech Synthesis of Audio from Brain Activity

In three separate experiments, research teams used electrocorticography (ECoG) to measure electrical impulses in the brains of human subjects while the subjects listened to someone speaking, or while the subjects themselves spoke. The data was then used to train neural networks to produce speech sound output. The motivation for this work is to help people who cannot speak by creating a brain-computer interface or "speech prosthesis" that can directly convert signals in the user's brain into synthesized speech sound. The first experiment, which was run by a team at Columbia University, used data from patients undergoing treatment for epilepsy. The patients had electrodes implanted in their auditory cortex, and ECoG data was collected from these electrodes while the patients listened to recordings of short spoken sentences. The researchers trained a deep neural network (DNN) built with Keras and TensorFlow, using the ECoG data as the input and a vocoder/spectrogram representation of the recorded speech as the target.
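The supervised setup pairs windows of neural samples with the spectrogram frames they align to. A toy framing sketch of that input/target pairing (synthetic one-dimensional lists stand in for the real multichannel ECoG recordings and spectrograms; the exact windowing the Columbia team used is not described in the excerpt):

```python
def make_training_pairs(ecog, spectrogram, window=3):
    """Pair each sliding window of ECoG samples with the spectrogram
    frame aligned to the window's end -- the (input, target) examples
    a DNN would be trained on."""
    pairs = []
    for end in range(window, len(ecog) + 1):
        pairs.append((ecog[end - window:end], spectrogram[end - 1]))
    return pairs

ecog = [0.1, 0.4, 0.2, 0.9, 0.3]       # synthetic neural samples
spec = ["f0", "f1", "f2", "f3", "f4"]  # synthetic spectrogram frames
pairs = make_training_pairs(ecog, spec)
print(len(pairs))  # → 3 (windows of length 3 that fit in 5 samples)
```

Each pair is one training example; the network learns to map the window of brain activity to the speech-sound frame, and a vocoder turns predicted frames back into audio.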


An inside look at Tempo Automation's IIoT-powered ‘smart factory’
“There could be up to 20 robots, 400 unique parts, and 25 people working on the factory floor to produce one order start to finish in a matter of hours,” explained Shashank Samala, Tempo’s co-founder and vice president of product in an email. Tempo “employs IIoT to automatically configure, operate, and monitor” the entire process, coordinated by a “connected manufacturing system” that creates an “unbroken digital thread from design intent of the engineer captured on the website, to suppliers distributed across the country, to robots and people on the factory floor.” Rather than the machines on the floor functioning as “isolated islands of technology,” Samala added, Tempo Automation uses Amazon Web Services (AWS) GovCloud to network everything in a bi-directional feedback loop. “After customers upload their design to the Tempo platform, our software extracts the design features and then streams relevant data down to all the devices, processes, and robots on the factory floor,” he said. This loop then works the other way: As the robots build the products, they collect data and feedback about the design during production.



Using value stream management and mapping to boost business innovation

Value stream mapping purists may argue that the above exercise is not the real process because traditional components such as the time metrics, activity ratios and future state were omitted. Fear not, these components are included in a full-blown formal value stream mapping exercise. However, teams such as Thrasher's have made substantial improvements from shorter versions of the exercise by making work visible. The net result is a compelling change in the right direction. Value stream management is the practice of improving the flow of the activities that deliver and protect business value -- and proving it. It's a nascent digital concept that measures work artifacts in real time to visualize the flow of business value and expose bottlenecks to optimize business value. A significant strength of this practice centers around how and where work is undertaken. This activity is captured through the work items mentioned above in the toolchain, providing a traceable record of how software is planned, built and delivered.


Redis in a Microservices Architecture

Redis can be widely used in microservices architecture. It is probably one of the few popular software solutions that may be leveraged by your application in so many different ways. Depending on the requirements, it can act as a primary database, cache, or message broker. Because it is also a key/value store, we can use it as a configuration server or discovery server in your microservices architecture. Although it is usually defined as an in-memory data structure store, we can also run it in persistent mode. ... If you have already built microservices with Spring Cloud, you probably have some experience with Spring Cloud Config. It is responsible for providing a distributed configuration pattern for microservices. Unfortunately, Spring Cloud Config does not support Redis as a property source's backend repository. That's why I decided to fork the Spring Cloud Config project and implement this feature. I hope my implementation will soon be included in the official Spring Cloud release, but, for now, you may use my forked repo to run it. It is available on my GitHub account: piomin/spring-cloud-config.
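The config-server idea maps naturally onto Redis data types: one hash per application/profile, with each field holding a property key. The excerpt doesn't specify the fork's actual key layout, so the sketch below uses an illustrative convention and a plain in-memory stand-in for a live Redis connection (the class and key names are assumptions, not the fork's API):

```python
class FakeRedis:
    """In-memory stand-in for a Redis connection (illustrative only)."""
    def __init__(self):
        self._hashes = {}

    def hset(self, key, field, value):
        # Mirrors Redis HSET: set one field inside a hash.
        self._hashes.setdefault(key, {})[field] = value

    def hgetall(self, key):
        # Mirrors Redis HGETALL: return the whole hash as a dict.
        return dict(self._hashes.get(key, {}))

def load_properties(redis, application, profile="default"):
    """Fetch one service's configuration, stored as a hash per app/profile."""
    return redis.hgetall(f"{application}-{profile}")

r = FakeRedis()
r.hset("account-service-default", "server.port", "8090")
r.hset("account-service-default", "eureka.client.enabled", "true")
print(load_properties(r, "account-service"))
```

With a real deployment you would swap `FakeRedis` for an actual client connection; the hash-per-service layout keeps each application's properties fetchable in a single round trip.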


Data visualization via VR and AR: How we'll interact with tomorrow's data

Data visualization in VR and AR could be the next big use case for the technologies. It's early days, but examples of 3D data visualizations hint at big changes to come in how we interact with data. Recently, I spoke with Simon Wright, Director of AR/VR for Genesys, about one such experiment. Genesys helps companies streamline their customer service experience with automated phone menus and chatbots, for example, but in his role Wright has a lot of latitude to push the boundaries of Mixed Reality technologies for enterprise customers. "One of the things I'm personally excited about is the ability to create hyper visualizations," Wright tells me. "We capture massive amounts of data, and we've created prototypes to almost magically bring up a 3D model of Genesys data. This is where there could be huge opportunities for AR, which has advantages over a 2D screen." For one recent project, Wright and his team wanted to project data pertaining to website analytics onto the wall of a restaurant in a beautiful way. "It started as a marketing-led project," he explains.




Quote for the day:


"Leadership to me means duty, honor, country. It means character, and it means listening from time to time." -- George W. Bush