Daily Tech Digest - May 24, 2018

Fintech is disrupting big banks, but here’s what it still needs to learn from them

As a general rule, fintech’s priorities lean more toward customer convenience than risk management. The sector’s value proposition is based largely on its ability to say yes where traditional finance would say no, allowing more people to take out loans, open credit cards, and open checking accounts than ever before. Just like tech startups that are funded by venture capital, fintechs also place a premium on growth, which makes turning down a potential customer due to credit risk (or any other factor) painful, but essential for sustainable growth. Though it’s definitely possible to grow while managing risk intelligently, it’s also true that pressure to match the “hockey-stick” growth curves of pure tech startups can lead fintechs down a dangerous path. Startups should avoid the example of Renaud Laplanche, former CEO of peer-to-peer lender Lending Club, who was forced to resign in 2016 after selling loans to an investor that violated that investor’s business practices, among other accusations of malfeasance. It’s not just financial risk that they may manage badly: the sexual harassment scandal that recently rocked fintech unicorn SoFi shows that other types of risky behavior can impact bottom lines, too.

How to mitigate the complexity of microservice communication

"The biggest single challenge arises from the fact that, with microservices, the elements of business logic are connected by some sort of communication mechanism … rather than direct links within program code," said Randy Heffner, principal analyst at Forrester. This means there are more opportunities for errors in network and container configurations, errors in request or response, network blips, and errors in security configurations, configs and more. In other words, there are simply many more places where things can go wrong with microservice communication. It's also much more challenging to debug application logic that flows across multiple microservices. In a monolithic app, a developer can embed multiple debug and trace statements in code that will all automatically go into one log. With microservices, developers need to collect logs and other debugging outputs. They then need to correlate those logs and outputs into a single stream in order to debug a collection of microservices that interact. This is even harder to do when multiple streams of testing are active in an integration testing environment.

27 Incredible Examples Of AI And Machine Learning In Practice

With approximately 3.6 petabytes of data (and growing) about individuals around the world, credit reference agency Experian gets its extraordinary amount of data from marketing databases, transactional records and public information records. They are actively embedding machine learning into their products to allow for quicker and more effective decision-making. Over time, the machines can learn to distinguish what data points are important from those that aren't. Insights extracted by the machines will allow Experian to optimize its processes. American Express processes $1 trillion in transactions and has 110 million AmEx cards in operation. They rely heavily on data analytics and machine learning algorithms to help detect fraud in near real time, therefore saving millions in losses. Additionally, AmEx is leveraging its data flows to develop apps that can connect a cardholder with products or services and special offers. They are also giving merchants online business trend analysis and industry peer benchmarking.

Skills shortage a major cyber security risk

Security industry leaders are increasingly putting emphasis on cyber resilience based on good detection and response capabilities, rather than relying mainly on defence technologies and controls. "These results reflect the difficulty in defending against increasingly sophisticated attacks and the realisation that breaches are inevitable – it's just a case of when and not if," said Piers Wilson, director at the IISP. "Security teams are now putting increasing focus on systems and processes to respond to problems when they arise, as well as learning from the experiences of others." When it comes to investment, the survey suggests that for many organisations, the threats are outstripping budgets in terms of growth. The number of businesses reporting increased budgets dropped from 70% to 64%, and the share of businesses with falling budgets increased from 7% to 12%. According to the IISP, economic pressures and uncertainty in the UK market are likely to be restraining factors on security budgets, while the demands of the General Data Protection Regulation (GDPR) and other regulations such as the Payment Services Directive (PSD2) and the Networks and Information Systems Directive (NISD) are undoubtedly putting more pressure on limited resources.

Talos finds new VPNFilter malware hitting 500K IoT devices, mostly in Ukraine

While the researchers have said that such a claim isn't definitive, they have observed VPNFilter "actively infecting" Ukrainian hosts, utilising a command and control infrastructure dedicated to that country. The researchers also state VPNFilter is likely state sponsored or state affiliated. As detailed by the researchers, the stage 1 malware persists through a reboot, which normal malware usually does not, with the main purpose of the first stage to gain a persistent foothold and enable the deployment of the stage 2 malware. "Stage 1 utilises multiple redundant command and control (C2) mechanisms to discover the IP address of the current stage 2 deployment server, making this malware extremely robust and capable of dealing with unpredictable C2 infrastructure changes," the researchers wrote. The stage 2 malware possesses capabilities such as file collection, command execution, data exfiltration, and device management; however, the researchers said some versions of stage 2 also possess a self-destruct capability that overwrites a critical portion of the device's firmware and reboots the device, rendering it unusable.

AWS facial recognition tool for police highlights controversy of AI in certain markets

Amazon had also touted the ability to use Rekognition with footage from police body camera systems, though the ACLU notes that mentions of this type of interaction were scrubbed from the AWS website "after the ACLU raised concerns in discussions with Amazon," adding that this capability is still permissible under Amazon's terms of service. This change "appears to be the extent of its response to our concerns," according to the ACLU. Naturally, using cloud services to build a panopticon is likely to generate concern among residents of the localities that have deployed the technology. Under optimal circumstances, this would be implemented following a referendum or, at a minimum, a period of public comment about combining surveillance technology with mass facial recognition. The ACLU sought documents indicating that any such outreach was attempted, though no documents were discovered. It does, however, point out the existence of an internal email from a Washington County employee stating that the "ACLU might consider this the government getting in bed with big data."

DevOps is a culture, but here's why it's actually not

For DevOps to continue to grow, though, we must put aside the idea that DevOps is a culture. That framing is simply not sufficient and can encourage an all-or-nothing approach. DevOps is a transformation process and a collaboration philosophy, and this particular definition comes with different approaches and different criteria for success. It is time to set up standards to help people imagine practical goals and adopt new norms we can all share. Instead of an all-or-nothing approach, standards unify people and organizations around common goals that are independent of technology, team size, priority or any other criterion. Setting up standards can also be an iterative process. Take time to think through and grow standards that can continuously shape the interaction between developers, ops, code and servers. And make sure these DevOps standards give the different stakeholders the time to experiment, learn and provide feedback. The twelve-factor app methodology, cloud-native principles and the Consortium for IT Software Quality standards are good examples to follow and consider for iteration.
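One concrete standard the paragraph above points to is the twelve-factor rule of storing configuration in the environment rather than in code. A minimal sketch of that rule (the variable names are invented for illustration):

```python
import os

def load_config(env=None):
    """Read service configuration from environment variables,
    falling back to safe development defaults -- the twelve-factor
    'config' rule. The variable names are illustrative."""
    env = os.environ if env is None else env
    return {
        "db_url": env.get("APP_DB_URL", "sqlite:///dev.db"),
        "log_level": env.get("APP_LOG_LEVEL", "INFO"),
        "worker_count": int(env.get("APP_WORKERS", "4")),
    }

# Simulate a production environment by passing a dict instead of os.environ
config = load_config({"APP_DB_URL": "postgres://prod/db", "APP_WORKERS": "16"})
print(config["db_url"], config["worker_count"])
```

Because the same build reads its settings from outside, the code needs no change between development, staging and production, which is exactly the kind of shared norm a standard provides.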

AI boosts data center availability, efficiency

AI in the data center, for now, revolves around using machine learning to monitor and automate the management of facility components such as power and power-distribution elements, cooling infrastructure, rack systems and physical security. Inside data-center facilities, there are increasing numbers of sensors that are collecting data from devices including power back-up (UPS), power distribution units, switchgear and chillers. Data about these devices and their environment is parsed by machine-learning algorithms, which cull insights about performance and capacity, for example, and determine appropriate responses, such as changing a setting or sending an alert. As conditions change, a machine-learning system learns from the changes – it's essentially trained to self-adjust rather than rely on specific programming instructions to perform its tasks. The goal is to enable data-center operators to increase the reliability and efficiency of the facilities and, potentially, run them more autonomously. However, getting the data isn’t a trivial task.
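The self-adjusting behaviour described above can be illustrated with a toy monitor that learns a rolling baseline from normal sensor readings instead of relying on a fixed, hand-programmed threshold. This is a simplified stand-in for a real machine-learning system, not a production algorithm:

```python
from collections import deque
from statistics import mean, stdev

class SensorMonitor:
    """Flag readings that deviate from a rolling baseline.

    The baseline self-adjusts as normal readings arrive, so the
    monitor adapts to changing conditions rather than following a
    hard-coded setpoint.
    """
    def __init__(self, window=20, tolerance=3.0):
        self.history = deque(maxlen=window)
        self.tolerance = tolerance

    def observe(self, reading):
        alert = False
        if len(self.history) >= 5:  # need a few samples to form a baseline
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(reading - mu) > self.tolerance * sigma:
                alert = True
        if not alert:
            self.history.append(reading)  # adapt baseline to normal data
        return alert

monitor = SensorMonitor()
# Chiller temperature readings; the last one is an anomalous spike
readings = [21.0, 21.2, 20.9, 21.1, 21.0, 21.3, 35.0]
alerts = [monitor.observe(r) for r in readings]
print(alerts)
```

In a real deployment the "appropriate response" step would follow the alert: changing a setting or paging an operator.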

The path to explainable AI

Traceability also addresses several challenges in AI's implementation. First, it helps ensure quality in new and emerging applications of this advanced technology. Second, in the evolution of human and machine interactions, traceability makes answers more understandable by humans, and helps drive AI's adoption and the related change management necessary for successful implementations. Third, it helps drive compliance in regulated industries such as life sciences, healthcare, and financial services. Traceability exists in some more mature AI applications like computational linguistics. In other, less mature technologies, the so-called black box problem still tends to appear. This mostly occurs in the context of deep neural networks, the machine learning algorithms used for image recognition or natural language processing involving massive data sets. Because a deep neural network is established through multiple correlations across these massive data sets, it is hard to know, for now, why it came to a particular conclusion. Companies need a more comprehensive governance structure, especially for advanced technologies like neural networks that do not permit traceability.
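For simple models, traceability can be as direct as recording each feature's contribution to a score. The toy linear scorer below (weights and feature names are invented) shows the kind of per-answer explanation that deep neural networks do not offer out of the box:

```python
def score_with_trace(features, weights):
    """Compute a linear score and return, alongside it, how much
    each feature contributed -- a simple form of traceability."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

# Hypothetical credit-style example (numbers are made up)
weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
features = {"income": 4.0, "debt": 2.0, "years_employed": 5.0}
score, trace = score_with_trace(features, weights)

print(round(score, 2))  # overall decision score
for name, contrib in sorted(trace.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {contrib:+.2f}")  # why the model scored this way
```

A deep network replaces those three weights with millions of correlated parameters, which is precisely why no equally direct trace exists for its conclusions.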

Discussions on the Future of .NET Core

One of the major weaknesses of .NET Core today is the misunderstanding that comes with it. Countless developers are still asking, "What's the difference between .NET Core, .NET Standard and .NET Framework?" Likewise, which one should they choose, and why? The choices aren't always easy or clear. For example, it is actually possible to have a .NET Core application that targets the .NET Framework – which, if you think about it, is really confusing, because we know that both the .NET Framework and .NET Core are runtime implementations of .NET Standard. The .NET Core terminology is overloaded: there are .NET Core applications, the .NET Core CLI, the .NET Core SDK and .NET Core runtimes. I believe there is much room for improvement in making all of this easier to digest and use. There's still some work to be done on the performance side of things. Kestrel, the ASP.NET Core web server, performs extremely well in the TechEmpower "plaintext" benchmark, but not so well in the higher-level tests involving database queries and the like. Much of the code that's been migrated over from the full-fat .NET Framework could be improved a lot in that regard. The great thing is that people are now diving into the code and digging these things out.
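The confusing case mentioned above, a .NET Core SDK-style project that targets the .NET Framework, comes down to a single property in the project file. A sketch (the exact framework moniker is an example):

```xml
<!-- SDK-style (".NET Core") project file that nevertheless
     targets the full .NET Framework 4.7.2 -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net472</TargetFramework>
  </PropertyGroup>
</Project>
```

Swapping the value to a `netcoreapp` moniker would target .NET Core instead, and a `netstandard` moniker would produce a .NET Standard library, which is one reason the terminology feels so overloaded.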

Quote for the day:

"Prosperity isn't found by avoiding problems, it's found by solving them." -- Tim Fargo

Daily Tech Digest - May 23, 2018

12 Frequently Asked Questions on Deep Learning

Feature engineering is the process of putting domain knowledge into the creation of feature extractors to reduce the complexity of the data and make patterns more visible to learning algorithms. This process is difficult and expensive in terms of time and expertise. In machine learning, most applied features need to be identified by an expert and then hand-coded according to the domain and data type. For example, features can be pixel values, shape, texture, position and orientation. The performance of most machine learning algorithms depends on how accurately the features are identified and extracted. Deep learning algorithms instead try to learn high-level features from the data itself. This is a very distinctive part of deep learning and a major step ahead of traditional machine learning, because it removes the task of developing a new feature extractor for every problem. A convolutional neural network, for instance, will learn low-level features such as edges and lines in its early layers, then parts of faces, and then high-level representations of a whole face.
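To make the contrast concrete, here is what a hand-coded feature extractor looks like: a toy "edge strength" feature computed from raw pixel values. This is exactly the kind of extractor that an expert would once have designed by hand and that deep learning now learns on its own:

```python
def edge_strength(image):
    """Hand-coded feature: mean horizontal gradient magnitude.
    'image' is a list of rows of pixel intensities (0-255)."""
    total, count = 0, 0
    for row in image:
        for left, right in zip(row, row[1:]):
            total += abs(right - left)  # intensity jump between neighbours
            count += 1
    return total / count if count else 0.0

flat = [[100, 100, 100, 100]] * 3   # uniform patch: no edges
edged = [[0, 0, 255, 255]] * 3      # one strong vertical edge
print(edge_strength(flat), edge_strength(edged))
```

A downstream classifier would consume this single number as one input; designing dozens of such features well is the expensive expert work the paragraph describes.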

No CS degree? For skilled developers, 75% of hiring managers don't care

Strong work experience is the most important qualification that recruiters and hiring managers look for when filling tech positions, the report found. However, resume-bolstering factors like degree, prestige, and skill keywords are not accurate predictors of future job success, according to the report. Instead, hiring managers and recruiters are looking to indicators that demonstrate ability, such as previous work experience, years of work, and personal projects, which come closer to measuring a candidate's skills. ... Hiring managers' top three measures of success in recruiting were quality of candidate, future performance success, and employee retention, the report found. Misaligned skills and mismatched expectations for candidates are two of the top hurdles hiring managers face when working with recruiters, the report found. To solve this problem, recruiters should regularly check in with hiring managers to understand the nuances of the technical skills hiring managers are looking for in each open role. For example, what are the crucial must-have skills for a fullstack developer versus a back-end developer? This can help narrow down the pool of qualified candidates.

Doctors are using AI to see how diseases change our cells

This model can predict where these organelles will be found in any new cell, so long as it’s provided with an image from a microscope. The researchers also used AI to create a probabilistic model that takes its best guess at where one might expect to find those same organelles if provided with a cell’s size and shape, along with the location of its nucleus. These models are useful for doctors and scientists because they provide a close-up look at the effects of cancer and other diseases on individual cells. By feeding the AI with data and images of cancer cells, they can get a more complete picture of how the cell, and its individual components, are affected. And that can indicate how doctors can help each patient with treatment tailored to their disease. The team from the Allen Institute hopes their tools can help democratize medical research, improving healthcare in underserved areas. So the researchers are working to improve them, creating more complete models, according to NPR. They hope to have a broader database, full of models of more cells, available over the next few months.

Everything you need to know about the new general data protection regulations

GDPR applies to any organisation operating within the EU, as well as any organisations outside of the EU which offer goods or services to customers or businesses in the EU. That ultimately means that almost every major corporation in the world will need to be ready when GDPR comes into effect, and must start working on their GDPR compliance strategy. There are two different types of data-handlers the legislation applies to: 'processors' and 'controllers'. The definitions of each are laid out in Article 4 of the General Data Protection Regulation. A controller is a "person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of processing of personal data", while a processor is a "person, public authority, agency or other body which processes personal data on behalf of the controller". If you are currently subject to the UK's Data Protection Act, for example, it's likely you will have to look at GDPR compliance too.

You’ve probably been hiring the wrong kind of data scientist

A lot of people like to call themselves data scientists because they’re using point-and-click tools, like Tableau and Excel, to perform data analysis and visualization in order to gain business insights. ... The real challenge comes from handling large datasets, including textual or other unstructured raw data, and doing so in real time–all of which requires programmatic execution. That is, coding. Indeed, many of the gains in AI and data science are thanks to what researchers are calling the “Unreasonable Effectiveness of Data”–being able to learn programmatically from astronomical data sets. This work is also highly nuanced and detailed, and doing the wrangling and cleaning properly is crucial for developing effective machine intelligence later on. Point-and-click software just isn’t sophisticated enough to substitute for good programming skills (after all, you can perform machine learning with Excel). This goes beyond just the usual mantra of “garbage in, garbage out.” Employers are trying to manage turbocharged public relations on social media while staying in regulators’ good graces despite that enhanced scrutiny.
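A small example of the kind of programmatic wrangling the author has in mind: parsing and normalizing messy raw records in code rather than with point-and-click tools (the record layout is invented for illustration):

```python
import re

def clean_record(raw):
    """Normalize one messy raw record into a usable row.
    The 'name,amount' field layout is hypothetical."""
    name, amount = raw.split(",", 1)
    name = name.strip().title()
    # Strip currency symbols and thousands separators before parsing
    amount = float(re.sub(r"[^0-9.\-]", "", amount))
    return name, amount

raw_rows = ["  alice SMITH , $1,200.50", "bob jones,300"]
cleaned = [clean_record(r) for r in raw_rows]
print(cleaned)
```

At scale this logic runs inside a distributed job over millions of rows; the point is that it is expressed in code, versioned and testable, not clicked together in a GUI.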

A Simple and Scalable Analytics Pipeline

The core piece of technology I'm using to implement this data pipeline is Google's DataFlow, which is now integrated with the Apache Beam library. DataFlow tasks define a graph of operations to perform on a collection of events, which can be streaming data sources. This post presents a DataFlow task implemented in Java that streams tracking events from a PubSub topic to a data lake and to BigQuery. An introduction to DataFlow and its concepts is available in Google's documentation. While DataFlow tasks are portable, since they are now based on Apache Beam, this post focuses on how to use DataFlow in conjunction with additional managed services on GCP to build a simple, serverless, and scalable data pipeline. The data pipeline that performs all of this functionality is relatively simple. The pipeline reads messages from PubSub and then transforms the events for persistence: the BigQuery portion of the pipeline converts messages to TableRow objects and streams directly to BigQuery, while the AVRO portion of the pipeline batches events into discrete windows and then saves the events to Google Storage.
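The "batches events into discrete windows" step can be sketched without any Beam dependency. The snippet below groups timestamped events into fixed windows the way the AVRO branch of the pipeline does before writing files out; the event tuple shape is illustrative and is not the DataFlow API:

```python
def window_events(events, window_secs=60):
    """Batch events into fixed, discrete time windows.
    Each event is a (unix_timestamp, payload) tuple; the result
    maps each window's start time to the payloads that fall in it."""
    windows = {}
    for ts, payload in events:
        start = ts - (ts % window_secs)  # start of the window ts falls in
        windows.setdefault(start, []).append(payload)
    return windows

events = [(100, "a"), (119, "b"), (120, "c"), (185, "d")]
batches = window_events(events, window_secs=60)
print(sorted(batches.items()))
```

In Beam this grouping is declared with a fixed-window transform rather than written by hand, but the partitioning logic it performs is the same.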

7 risk mitigation strategies for the cloud

“Cloud services often encourage ‘casual use’ of data; I can collect, search and store anything just about anywhere” is the hook, says John Hodges, vice president of product strategy for AvePoint. “We often see this in systems like Box, DropBox or OneDrive, where there is a real mixed-use danger in how content is stored and shared.” The simple solution? Prohibit services where mixed-use is likely to be a problem. ... Zero trust is an IT security strategy wherein an organization requires every user, system or device inside or outside its perimeter to be verified and validated before connecting to its systems. How can you use a zero trust model to mitigate cloud risk? For Insurity, an organization that specializes in property and casualty insurance services and software, a zero trust approach means restricting access tightly. “We provide logical access to the minimum set of users with a minimum set of rights and privileges in line with job function requirements. This control is audited internally by our Enterprise Security team and externally as part of our annual SOC audit,” says Jonathan Victor, CIO of Insurity. Regularly examine user access levels and ask yourself whether they make sense.
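The minimum-rights idea quoted above reduces to a deny-by-default permission check: every access is verified against an explicit per-role grant list, and anything not granted is refused. Role and permission names below are invented for illustration, not Insurity's actual scheme:

```python
# Explicit grants per job function; anything absent is denied.
GRANTS = {
    "claims_adjuster": {"read_claims", "update_claims"},
    "auditor": {"read_claims", "read_logs"},
}

def is_allowed(role, permission):
    """Deny by default; allow only what the role explicitly has."""
    return permission in GRANTS.get(role, set())

print(is_allowed("auditor", "read_claims"))    # within job function
print(is_allowed("auditor", "update_claims"))  # denied: not granted
print(is_allowed("intern", "read_claims"))     # unknown role: denied
```

The periodic review the author recommends amounts to auditing this grant table: does each entry still make sense for that job function?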

What Is Microservices? An Introduction to Microservice Architecture

Now, let us look at a use-case to get a better understanding of microservices. Let's take a classic use case of a shopping cart application. When you open a shopping cart application, all you see is just a website. But, behind the scenes, the shopping cart application has a service for accepting payments, a service for customer services and so on. Assume that developers of this application have created it in a monolithic framework. So, all the features are put together in a single code base and are under a single underlying database. Now, let's suppose that there is a new brand coming up in the market and developers want to put all the details of the upcoming brand in this application. Then, they not only have to rework the service for new labels, but they also have to reframe the complete system and deploy it accordingly. To avoid such challenges developers of this application decided to shift their application from a monolithic architecture to microservices.

Cybercriminals Battle Against Banks' Incident Response

Persistent attackers aren't backing down when banks detect them and launch their incident response processes, either. One in four bank CISOs in the Carbon Black study say their institution faced attackers fighting back when they got spotted, trying to deter defenses and the investigation into the attack. "They are leaving wipers or destructive malware to inhibit [IR], deleting logs, and inhibiting the capacity of forensics tools," for example, says Tom Kellermann, chief cybersecurity officer at Carbon Black. "Sometimes they are using DDoS to create smokescreens during events." These counter-IR activities are forcing banks to be more proactive and aggressive as well, he says. "They need to have threat hunting teams. You can't just rely on telemetry and alerts." While banks are often relying on their IR playbooks, attackers have the freedom to freelance and counter IR activities. They're changing their malware code on the fly when it gets detected, deleting activity logs to hide their tracks, and even targeting bank security analysts and engineers to help their cause.
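One simple defensive counter to attackers deleting logs is tamper evidence: if audit records carry monotonically increasing sequence numbers, gaps in the sequence reveal deletions. A sketch of that idea (a general technique, not any specific product's feature):

```python
def find_log_gaps(seqs):
    """Return (missing_count, gap_ranges) for a list of log
    sequence numbers; gaps suggest records were deleted."""
    seqs = sorted(seqs)
    gaps = []
    for prev, cur in zip(seqs, seqs[1:]):
        if cur - prev > 1:
            gaps.append((prev + 1, cur - 1))  # inclusive missing range
    missing = sum(hi - lo + 1 for lo, hi in gaps)
    return missing, gaps

# Toy example: records 1004-1006 were wiped by the attacker
missing, gaps = find_log_gaps([1001, 1002, 1003, 1007, 1008])
print(missing, gaps)
```

Shipping logs to a separate, write-only store as they are generated makes this check meaningful, since the attacker cannot rewrite the remote copy to hide the gap.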

Certain types of content make for irresistible phishes

It used to be that fear, urgency and curiosity were the top emotional motivators behind successful phishes. Now they’ve been replaced by entertainment, social media and reward/recognition. According to the company, simulated eCards, internal promotion/reward programs, and a number of financial and compliance scenarios (e.g., phishes with “Financial Information Review” or “Compliance Training” in the subject line) are most successful at getting users to click. Employees should be trained to be aware of their emotional reactions to emails and see them as phishing triggers. “When creating simulations, remember consumer scams—those phony Netflix or LinkedIn emails sent to busy employees, who are glad to switch gears and click on something fun,” the company notes. “Understand the dynamics of entertainment or social phishing (think uncritical social acceptance and shortened URLs).” And when it comes to emails promising rewards, employees should be taught to be critical of rewards and deals that sound too good to be true.

Quote for the day:

"You don't have to hold a position in order to be a leader." -- Anthony J. D'Angelo

Daily Tech Digest - May 22, 2018

Smart Homes of Tomorrow – This Is Why We Can’t Have Nice Things

What new design concepts are needed to address these emerging technologies and the risks presented by the ever-connected smart home? It's not about informing everyone that everything can be hacked. It's about creating awareness and helping others understand the risks involved with modern technology. It's about helping the builders and designers understand the technological solutions required by their clients and how to implement them correctly, so they too can educate the user on how to maintain their system safely and securely. Consumers need to be aware of their devices' remote and local environment, and how their data is collected and stored. They also need to be aware of how their personal devices and appliances can be affected by outages outside of their control, like a DDoS attack on a cloud environment or something as simple as a power outage. Finally, we as a community need to put pressure on the manufacturers to produce secure devices with clear plans on how to patch and mitigate future vulnerabilities. Manufacturers also have to begin working together to secure user data and integrate it safely in a crowded environment of smart links and physical devices, ultimately preventing unauthorized remote access.

Legit tools exploited in bank heists

Native tools such as PowerShell and Windows Management Instrumentation (WMI) grant users exceptional rights and privileges to carry out the most basic commands across a network. These “non-malware” or fileless attacks account for more than 50% of successful breaches, the report said, with attackers using existing software, allowed applications and authorised protocols to carry out malicious activities. In this way, attackers are able to gain control of computers without downloading any malicious files and therefore remain unnoticed by malware-detection security systems.  ... Finally, PowerShell is used to connect to a command and control server to download a malicious PowerShell script designed to find sensitive data and send it to the attacker, all without downloading any malware. Almost every Carbon Black customer (97%) was targeted by a non-malware attack during each of the past two years, but the report notes that awareness of malicious usage for tools such as PowerShell has never been higher, with 90% of CISOs reporting seeing an attempted attack using PowerShell.
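Defenders often hunt for this kind of "non-malware" activity by scanning process command lines for known-suspicious PowerShell flags. A heuristic sketch (the indicator list below is illustrative and far from exhaustive):

```python
import re

# Heuristic indicators of malicious PowerShell use matching the
# download-and-execute-in-memory pattern described above.
SUSPICIOUS = [
    r"-enc(odedcommand)?\b",   # base64-encoded payloads
    r"downloadstring",         # in-memory download of a script
    r"-nop\b|-noprofile\b",    # skip profile to evade local defaults
    r"hidden",                 # hidden window, no visible console
]

def flag_powershell(cmdline):
    """Return the indicators matched in a process command line."""
    low = cmdline.lower()
    return [p for p in SUSPICIOUS if re.search(p, low)]

benign = "powershell.exe Get-ChildItem C:\\Logs"
nasty = ("powershell.exe -NoProfile -WindowStyle Hidden "
         "-enc SQBFAFgA...")
print(flag_powershell(benign))
print(flag_powershell(nasty))
```

Command-line heuristics alone are easy to evade, which is why the report pairs them with behavioural telemetry, but they remain a cheap first filter.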

4 best practices for tapping the potential of prescriptive analytics

One potential value of prescriptive analytics is that you don’t necessarily need a ton of data to reach the best decision or outcome. Prescriptive analytics focuses the question you’re asking, and the decisions you’re trying to reach, to one tangible answer using a smart model of your business that is not dependent on the amount of data (how much or how little) that you have. Predictive techniques and functionalities can be great at identifying a multitude of options through statistical modeling and forecasting, as long as you have the relevant data—but that’s precisely the problem. It’s difficult to process and synthesize numerous options and the nuanced differences among them to determine what you should actually do. How can you be sure that you’re making the best decision? How can you be sure of the impact it will have on your company? Prescriptive analytics can involve hundreds of thousands of tradeoffs associated with a question you might have, and it uses the available data to identify the best decision and impact relative to the goal you’re trying to achieve. 
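The "evaluate the tradeoffs against a goal and return one answer" shape of prescriptive analytics can be shown in a few lines. The shipping options and numbers below are invented for illustration:

```python
def best_decision(options, objective):
    """Prescriptive step: evaluate each feasible option against the
    business objective and return the single best one."""
    return max(options, key=objective)

# Toy model: pick the cheapest shipping plan that meets the deadline
plans = [
    {"name": "air",   "cost": 900, "days": 1},
    {"name": "truck", "cost": 300, "days": 4},
    {"name": "rail",  "cost": 200, "days": 9},
]
deadline = 5
feasible = [p for p in plans if p["days"] <= deadline]   # constraints
choice = best_decision(feasible, objective=lambda p: -p["cost"])
print(choice["name"])
```

Real prescriptive engines replace the three options with hundreds of thousands of tradeoffs and use mathematical optimization rather than enumeration, but the structure is the same: a model of the business, constraints, one objective, one recommended decision.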

Technological advancements are changing the landscape for new and existing businesses alike

With Brexit looming over both small and large businesses operating within the UK, maintaining a competitive price point despite new tariffs will be a tough challenge to overcome. However, with new technological solutions opening up greater levels of efficiency and productivity, this task should not appear as daunting as many believe. Tech companies will continue to innovate solutions for both current and future business issues, finding new ways to improve efficiency within the business environment. Voice recognition technology such as iBOB, for instance, is now freeing up valuable time for business owners and receptionists. It is reducing the need for routine customer service interactions on the phone, such as appointment bookings, which makes the human interactions that remain more meaningful and allows small businesses to focus their resources on more profitable aspects of their business. Employing affordable technological solutions, with an aim to focus added work hours on tasks more closely related to the bottom line, will allow existing businesses to maintain their competitive position within the market.

Cultural change more important than tech when implementing big data

“It is not your data or my data; it is the firm’s data, and the value you create for the business is from that data,” Tewary explained. “It is a transformation. It’s changing the people culture aspect, so there’s a lot of education. You know, you have to be an evangelist. You wear multiple hats to show people the value.” For Tewary at Verizon, finding advocates within the company for sharing big data was crucial. “We found champions,” he said. “For example, finance … was a good champion for us, where we used the data and analytics to really actually launch some very critical initiatives for the firm — asset-backed securities. … That created the momentum.” Dobrin agreed with this strategy of using champions within an enterprise to help lead the way for the entire company. “It’s not just a jump to the top of the ladder, because there’s just a lot of work that’s required to do it. You can do that within a business unit.” While the whole enterprise doesn’t need to get there all at the same time, as other areas of the enterprise begin to see the use of big data and how it can change the game, they will be open to the idea, Dobrin explained.

What is SDN? How software-defined networking changed everything

To stay competitive in networking and to avoid being rendered obsolete, network equipment vendors have either blazed the trails for SDN or found themselves adopting it reluctantly, perhaps looking a little singed in the process. One vendor clearly in the former camp, not the latter, is Juniper Networks. It plunged into the SDN field during the fateful year of 2012, first by purchasing a firm called Contrail, and then by building it into an open-source virtual appliance ecosystem unto itself: OpenContrail. As the diagram above depicts, OpenContrail serves as a device that provides the routing logic for distributed operating systems that host Docker containers. ... "It's a big part of operating and automating both a virtual and a physical infrastructure. It orchestrates the VNFs [virtual network functions] and puts together the service chains, all the way to the edge and to the core. Contrail uses vRouter and, in a distributed data center infrastructure, reach into any part of the cloud, string up the required VNFs, stitch together the different pieces of the service, and deliver a custom service to a certain vertical, or a group of end customers. It automates that whole process of customizing the services that can be offered, ultimately, to our service provider customers."

How can you categorize consumers who keep remaking themselves?

Perhaps there won’t be a “mass market” for consumer goods anymore; just a mass of individuals who are increasingly difficult to categorize, and who reinvent themselves from moment to moment, from platform to platform. People will still want to gather in groups with like-minded people. But they will find them through technology and data and connect with them based on their shared values and interests rather than practical connections, such as living in the same area. Rather than being defined by markers such as gender, age or location, they will express themselves in ways that are more fluid and flexible. In one of the future worlds we modeled at our hack week in Berlin, these groups – or “tribes” – broke down physical borders and formed their own communities, both real and virtual. They started to pool their purchasing power and demand a different relationship with brands. Today, consumer-facing companies try to tailor offers and discounts that will appeal to individual consumers, based on their purchasing data – with varying degrees of skill and success. In the future, will products themselves be individualized?

Containers and microservices and serverless, oh my!

Instead of using containers to run applications, serverless computing replaces containers with another abstraction layer. Its functions or back-end services are one-job programs that use compute resources without the developer having to manage them. Rather than calling functions in the traditional sense, in serverless a developer calls a working program to provide a service for the program they're building. The Cloud Native Computing Foundation (CNCF) Serverless Working Group defines serverless computing as "building and running applications that do not require server management. It describes a finer-grained deployment model where applications, bundled as one or more functions, are uploaded to a platform and then executed, scaled, and billed in response to the exact demand needed at the moment.” Or for another definition: "Serverless architectures refer to applications that significantly depend on third-party services,” says Mike Roberts, engineering leader and co-founder of Symphonia, a serverless and cloud architecture consultancy. “By using these ideas, and by moving much behavior to the front end, such architectures remove the need for the traditional 'always on' server system sitting behind an application.”
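The CNCF's "one or more functions uploaded to a platform" model can be sketched as a single-purpose handler. The event shape and handler signature below are assumptions loosely modelled on common FaaS platforms, not any particular vendor's API:

```python
import json

def handler(event, context=None):
    """A one-job serverless function: it receives an event and returns a
    response. No server lifecycle code lives here; the platform is assumed
    to handle execution, scaling, and per-invocation billing."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"greeting": f"Hello, {name}!"}),
    }

# The platform would invoke the handler per request; locally we can call it:
response = handler({"name": "CNCF"})
print(response["statusCode"])  # 200
```

The point of the abstraction is that everything outside the function body (process management, scaling, routing) is someone else's problem.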

Websites Still Under Siege After 'Drupalgeddon' Redux

Nearly two months after critical Drupal fixes were released, security firm Malwarebytes says it is still finding dozens of unpatched websites that have been exploited to host cryptocurrency miners or in other cases redirect to malware (see Cryptocurrency Miners Exploit Widespread Drupal Flaw). The problems stem from two critical vulnerabilities in Drupal, both of which are remotely executable. That's a perfect combination for attackers: Give them a widely used piece of software such as Drupal, as well as known vulnerabilities that can be easily and remotely exploited without even needing to attempt to trick a victim into taking any action. The first flaw, CVE-2018-7600, was revealed March 28, and the second, CVE-2018-7602, on April 25. The vulnerabilities were so severe that they were dubbed Drupalgeddon 2 and Drupalgeddon 3. Although patches have been available since the vulnerabilities were publicized, attackers are still taking advantage of websites that haven't been upgraded. "Rolling out a CMS is the easy part," writes Jerome Segura, lead malware intelligence analyst with Malwarebytes, in a blog post. "Maintaining it is where most problems occur due to lack of knowledge, fear of breaking something and, of course, costs."

Effective IoT Security Requires Machine Learning

Artificial intelligence (AI) is a branch of computer science that focuses on the theory and development of computer systems that are capable of performing tasks that normally require human intelligence, such as visual perception and decision-making. Machine Learning is a subset of AI that focuses on the practice of using algorithms to parse data, learn from it, and then make a prediction about something. In contrast to a static algorithm, a critical aspect of machine learning is that the machine is “trained” using large amounts of data and algorithms that give the machine the ability to continually learn how to perform a given task. Tools based on machine learning are necessary to supplement the existing set of security tools. These new tools help organizations identify and mitigate the emerging generation of security breaches that are designed to leverage both the legacy and evolving attack surfaces to evade the enterprise’s traditional defenses. When evaluating security tools based on machine learning, there are three key concepts that IT organizations should keep in mind.
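The contrast between a static algorithm and a detector trained on data can be illustrated with a deliberately tiny sketch: instead of a hard-coded threshold, the detector below learns a traffic baseline from observed samples and flags strong deviations. The class name, sample data, and three-sigma rule are all illustrative assumptions, far simpler than any real ML-based security tool:

```python
import statistics

class TrafficAnomalyDetector:
    """Toy illustration of the 'trained on data' idea: rather than a static
    threshold, the detector learns a baseline from observed IoT traffic and
    flags readings that deviate strongly from it."""

    def __init__(self, sigma=3.0):
        self.sigma = sigma
        self.mean = None
        self.stdev = None

    def train(self, samples):
        # "Training" here is just fitting a mean and standard deviation.
        self.mean = statistics.mean(samples)
        self.stdev = statistics.stdev(samples)

    def is_anomalous(self, value):
        return abs(value - self.mean) > self.sigma * self.stdev

# Baseline learned from normal device traffic (bytes/min, hypothetical):
detector = TrafficAnomalyDetector()
detector.train([100, 105, 98, 102, 101, 99, 103, 97])
print(detector.is_anomalous(104))   # False -- within the learned baseline
print(detector.is_anomalous(5000))  # True  -- far outside it
```

Retraining on fresh traffic lets the baseline follow the environment, which is the property a static rule lacks.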

Quote for the day:

"There are plenty of difficult obstacles in your path. Don't allow yourself to become one of them." -- Ralph Marston

Daily Tech Digest - May 21, 2018

Smaller micro datacentres utilising a modular approach are making it possible for data processing facilities to be based either on-site or as close to the location as possible. This edge computing is essential as it gives manufacturers the capabilities to run real-time analytics, rather than vast volumes of data needing to be shipped all the way to the cloud and back for processing. Modular datacentres give operators the scope to ‘pay as you grow’ as and when the time comes for expansion. The rise of modular UPS provides similar benefits in terms of power protection requirements too. And with all the additional revenues a datacentre could make from Industry 4.0, the need for a reliable and robust continuous supply of electricity becomes even more imperative. Transformerless modular UPSs deliver higher power density in less space, run far more efficiently at all power loads so waste less energy, and also don’t need as much energy-intensive air conditioning to keep them cool. Any data centre manager planning to take advantage of manufacturers’ growing data demands would be wise to review their current power protection capabilities.

Angular Application Generator - an Architecture Overview

Most of the time, tools like Angular CLI and Yeoman are helpful if we decide to simplify source generation by following a strict, pragmatic standard for developing an application. Angular CLI is useful for scaffolding, while Yeoman offers very interesting generators for Angular and brings a great deal of flexibility, because you can write your own generator to fit your needs. Both Yeoman and Angular CLI enrich software development through their ecosystems, delivering the bulk of defined templates to build up the initial software skeleton. On the other hand, it can be hard to rely on templating alone. Sometimes standardized rules can be translated into a couple of templates, which would be useful in many scenarios. But that isn’t the case when trying to automate very different forms with many variations, layouts and fields, where countless combinations of templates would have to be produced. That would bring only headaches and long-term issues, because it reduces maintainability and incurs technical debt.
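The maintainability argument can be made concrete: rather than one template per combination of layout and fields, a generator can compose output from a small set of field snippets driven by a declarative spec. The spec format and snippet table below are hypothetical, and the sketch emits plain HTML rather than Angular components for brevity:

```python
# With one snippet per field TYPE, any combination of fields is covered --
# avoiding a separate template for every possible form variant.
FIELD_SNIPPETS = {
    "text":     '<input type="text" name="{name}">',
    "checkbox": '<input type="checkbox" name="{name}">',
    "select":   '<select name="{name}"></select>',
}

def generate_form(spec):
    """spec: list of (field_name, field_type) pairs (hypothetical format)."""
    rows = [FIELD_SNIPPETS[ftype].format(name=fname) for fname, ftype in spec]
    return "<form>\n  " + "\n  ".join(rows) + "\n</form>"

print(generate_form([("email", "text"), ("subscribe", "checkbox")]))
```

Adding a fourth field type here is one dictionary entry, whereas a pure-template approach would multiply the number of templates to maintain.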

Asigra evolves backup/recovery to address security, compliance needs

Asigra addresses the Attack-Loop problem by embedding multiple malware detection engines into its backup stream as well as the recovery stream. As the backups happen, these engines are looking for embedded code and use other techniques to catch the malware, quarantine it, and notify the customer to make sure malware isn’t unwittingly being carried over to the backup repository. On the flip side, if the malware did get into the backup repositories at some point in the past, the malware engines conduct an inspection as the data is being restored to prevent re-infection. Asigra also has added the ability for customers to change their backup repository name so that it’s a moving target for viruses that would seek it out to delete the data. In addition, Asigra has implemented multi-factor authentication in order to delete data. An administrator must first authenticate himself to the system to delete data, and even then the data goes into a temporary environment that is time-delayed for the actual permanent deletion. This helps to assure that malware can’t immediately delete the data. These new capabilities make it more difficult for the bad guys to render the data protection solution useless and make it more likely that a customer can recover from an attack and not have to pay the ransom.
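The deletion safeguards described above (multi-factor authentication plus a time-delayed quarantine) can be sketched as follows. The class, method names, and seven-day window are illustrative assumptions, not Asigra's actual implementation:

```python
import time

class BackupRepository:
    """Toy sketch of delayed-deletion safeguards: deletes require an
    authenticated admin, and data is only quarantined, becoming permanent
    after a delay -- giving humans time to catch malware-driven deletions."""

    DELETE_DELAY_SECONDS = 7 * 24 * 3600  # assumed window, not Asigra's

    def __init__(self):
        self.data = {}
        self.quarantine = {}  # name -> time the delete was requested

    def request_delete(self, name, mfa_verified):
        if not mfa_verified:
            raise PermissionError("multi-factor authentication required")
        self.quarantine[name] = time.time()

    def purge_expired(self, now=None):
        now = time.time() if now is None else now
        for name, requested in list(self.quarantine.items()):
            if now - requested >= self.DELETE_DELAY_SECONDS:
                self.data.pop(name, None)
                del self.quarantine[name]

    def restore_from_quarantine(self, name):
        self.quarantine.pop(name, None)  # cancel a malicious or mistaken delete

repo = BackupRepository()
repo.data["backup-1"] = b"weekly full backup"
repo.request_delete("backup-1", mfa_verified=True)
repo.purge_expired()  # still inside the delay window: nothing is purged
print("backup-1" in repo.data)  # True -- the backup remains recoverable
```

Malware that steals credentials can still request a delete, but the delay plus quarantine means the data survives long enough for an operator to notice and cancel it.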

Moving away from analogue health in a digital world

Technology has an important role to play in the delivery of world-class healthcare. Secure information about a patient should flow through healthcare systems seamlessly. The quality, cost and availability of healthcare services depends on timely access to secure and accurate information by authorised caregivers. Interoperability cannot be solved by any one organisation in isolation. What is needed is for providers, innovators, payers, governing bodies and standards development organisations to come together to apply innovative and agile solutions to the problems that healthcare presents. We can clearly see the pressure that healthcare organisations are under at the moment. Researchers at the universities of Cambridge, Bristol, and Utah found that a staggering 14 million people in England now have two or more long-term conditions – putting a major strain on the UK’s healthcare services. To add to this, recent figures show that more than half of healthcare professionals believe the NHS’s IT systems are not fit for purpose.

Shadow IT is a Good Thing for IT Organizations

The move to shadow IT is a good thing for IT. Why? It is a wake-up call. It provides a clear message that IT is not meeting the requirements of the business. IT leaders need to rethink how to transform the IT organization to better serve the business and get ahead of the requirements. There is a significant opportunity for IT to play a leading role in business today. However, it goes beyond just the nuts and bolts of support and technology. It requires IT to get more involved in understanding how business units operate and to proactively seek opportunities to advance their objectives. It requires IT to reach beyond the cultural norms that have been built over the past 10, 20, 30 years. A new type of IT organization is required. A fresh coat of paint won’t cut it. Change is hard, but the opportunities are significant. This is more a story about IT moving from a reactive state to a proactive state. It does require a significant change in the way IT operates for many, both internally within the IT organization and externally with non-IT organizations. The opportunities can radically transform the value IT brings to driving the business forward.

Fintech is disrupting big banks, but here’s what it still needs to learn from them

Though it’s definitely possible to grow while managing risk intelligently, it’s also true that pressure to match the “hockey-stick” growth curves of pure tech startups can lead fintechs down a dangerous path. Startups should avoid the example of Renaud Laplanche, former CEO of peer-to-peer lender Lending Club, who was forced to resign in 2016 after selling loans to an investor that violated that investor’s business practices, among other accusations of malfeasance. It’s not just financial risk that they may manage badly: the sexual harassment scandal that recently rocked fintech unicorn SoFi shows that other types of risky behavior can impact bottom lines, too. While it might be common for pure tech startups to ask forgiveness, not permission, when it comes to the tactics they use to expand, fintechs should be aware that they’re playing in a different, more risk-sensitive space. Here again, they can learn from banks — who will also, coincidentally, look for sound risk management practices in all their partners. Since the 2008 crisis, financial institutions have increasingly taken a more holistic approach to risk as the role of the chief risk officer (CRO) has broadened.

The Banking Industry Sorely Underestimates The Impact of Digital Disruption

According to McKinsey, many organizations underestimate the increasing momentum of digitization. This includes the speed of technological changes, resultant behavioral changes and the scale of disruption. “Many companies are still locked into strategy-development processes that churn along on annual cycles,” states McKinsey. “Only 8% of companies surveyed said their current business model would remain economically viable if their industry keeps digitizing at its current course and speed.” Most importantly, McKinsey found that most organizations also underestimate the work that is needed to transform an organization for tomorrow’s reality. Much more than developing a better mobile app, organizations need to transform all components of an organization for a digital universe. If this is not done successfully, an organization risks being either irrelevant to the consumer or non-competitive in the marketplace … or both. Complacency in the banking industry can be partially blamed on the fact that digitization of banking has only just begun to transform the industry. No industry has been transformed entirely, with banking just beginning to realize core changes.

Why tech firms will be regulated like banks in the future

First, we have become addicted to technology. We live our lives staring at our devices rather than talking to each other or watching where we are going. It’s not good for us and, according to Pew Research, is responsible for more and more suicides, particularly amongst the young. Pew analysis found that the generation of teens they call the “iGen” – those born after 1995 – is much more likely to experience mental health issues than their millennial predecessors. Second is privacy. Facebook and other internet giants are abusing our privacy rights in order to generate ad revenues, as demonstrated by Cambridge Analytica, but they’re not the only ones. Google, Alibaba, Tencent, Amazon and more are all making bucks from analyzing our digital footprints, and we let them because we’re enjoying it, as I blogged about recently. Third is that the power of these firms is too great. When six firms – Google (Alphabet), Amazon, Facebook, Tencent, Alibaba and Baidu – hold almost all the information on all the citizens of the world digitally, it creates a backlash and a fear. I’ve thought about this a great deal and believe that it will drive a new decentralized internet.

Ericsson and SoftBank deploy machine learning radio network

"SoftBank was able to automate the process for radio access network design with Ericsson's service. Big data analytics was applied to a cluster of 2,000 radio cells, and data was analysed for the optimal configuration." Head of Managed Services Peter Laurin said Ericsson is investing heavily in machine learning technology for the telco industry, citing strong demand across carriers for automated networks. To support this, Ericsson is running an Artificial Intelligence Accelerator Lab in Japan and Sweden for its Network Design and Optimization teams to develop use cases. In introducing its suite of network services for "massive Internet of Things" (IoT) applications in July last year, Ericsson had also added automated machine learning to its Network Operations Centers in a bid to improve the efficiency and bring down the cost of the management and operation of networks. According to SoftBank's Tokai Network Technology Department radio technology section manager Ryo Manda, SoftBank is now collaborating with Ericsson on rolling out the solution to other regions. "We applied Ericsson's service on dense urban clusters with multi-band complexity in the Tokai region," Manda said.

Models and Their Interfaces in C# API Design

A real data model is deterministically testable. Which is to say, it is composed only of other deterministically testable data types until you get down to primitives. This necessarily means the data model cannot have any external dependencies at runtime. That last clause is important. If a class is coupled to the DAL at run time, then it isn’t a data model. Even if you “decouple” the class at compile time using an IRepository interface, you haven’t eliminated the runtime issues associated with external dependencies. When considering what is or isn’t a data model, be careful about “live entities”. In order to support lazy loading, entities that come from an ORM often include a reference back to an open database context. This puts us back into the realm of non-deterministic behavior, wherein the behavior changes depending on the state of the context and how the object was created. To put it another way, all methods of a data model should be predictable based solely on the values of its properties. ... Parent and child objects often need to communicate with each other. When done incorrectly, this can lead to tightly cross-coupled code that is hard to understand.
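The determinism rule translates to any language; below is a minimal sketch (in Python rather than C#, for brevity) of a "real" data model whose only method is predictable purely from its property values, with no database context in sight:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OrderLine:
    """A 'real' data model: no runtime dependencies, so every method is
    predictable based solely on the values of its properties."""
    quantity: int
    unit_price_cents: int

    def total_cents(self):
        return self.quantity * self.unit_price_cents  # deterministic

# Deterministically testable with no database, ORM, or repository in sight:
line = OrderLine(quantity=3, unit_price_cents=999)
print(line.total_cents())  # 2997
```

A "live entity" version of this class would hold a reference to an open database context, and `total_cents` could then succeed or fail depending on connection state, which is exactly the non-determinism the article warns against.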

Quote for the day:

"Strategy Execution is the responsibility that makes or breaks executives" -- Alan Branche and Sam Bodley

Daily Tech Digest - May 20, 2018

Understanding how Design Thinking, Lean and Agile Work Together

Lean started out as a response to scientific management practices in manufacturing. Organisations sought efficiency through processes, rules, and procedures and management was mostly about control. But in modern business, control is a falsehood. Things are too complex, too unpredictable, and too dynamic to be controlled. Lean offers a different mindset for managing any system of work. It’s fundamentally about exploring uncertainty, making decisions by experimenting and learning and empowering people who are closest to the work to decide how best to achieve desired outcomes. Lean says be adaptive, not predictive. Agile is related to Lean. The differences are mostly about what these mindsets are applied to, and how. In conditions of high uncertainty, Agile offers ways to build software that is dynamic and can adapt to change. This isn’t just about pivoting. It’s also about scaling and evolving solutions over time. If we accept that today’s solution will be different from tomorrow’s, then we should focus on meeting our immediate needs in a way that doesn’t constrain our ability to respond when things change later. The heart of Agile is adapting gracefully to changing needs with software.

How to write a GDPR-compliant data subject access request procedure

Recital 63 of the GDPR states, “a data subject should have the right of access to personal data which have been collected concerning him or her, and to exercise that right easily and at reasonable intervals, in order to be aware of, and verify, the lawfulness of the processing.” The procedure for making and responding to subject access requests remains similar to most current data protection laws, but there are some key changes you should be aware of under the GDPR:

- In most circumstances, the information requested must be provided free of charge. Organisations are permitted to charge a “reasonable fee” when a request is manifestly unfounded, excessive or repetitive. This fee must be based on the administrative cost of providing the information.
- Information must be provided without delay and within a month. Where requests are complex or numerous, organisations are permitted to extend the deadline to three months. However, they must still respond to the request within a month to explain why the extension is necessary.
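The response clock described above can be sketched as simple calendar arithmetic. This is an illustrative approximation only: the exact "one month" counting rules have legal subtleties that belong with your DPO rather than a snippet, and the helper below simply clamps the day-of-month for short months:

```python
import datetime

def dsar_deadlines(received, complex_request=False):
    """Approximate GDPR response deadlines for a subject access request:
    one month to respond (or to justify an extension), and up to three
    months in total for complex or numerous requests."""
    def add_months(d, n):
        month = d.month - 1 + n
        year = d.year + month // 12
        month = month % 12 + 1
        days_in_month = [31, 29 if year % 4 == 0 and (year % 100 != 0 or year % 400 == 0) else 28,
                         31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month - 1]
        # Clamp the day for short months (e.g. Jan 31 + 1 month -> Feb 28).
        return datetime.date(year, month, min(d.day, days_in_month))

    first_response = add_months(received, 1)  # reply, or explain the extension
    final_deadline = add_months(received, 3 if complex_request else 1)
    return first_response, final_deadline

first, final = dsar_deadlines(datetime.date(2018, 5, 25), complex_request=True)
print(first)  # 2018-06-25
print(final)  # 2018-08-25
```

Note that even for a complex request, the organisation cannot stay silent until the three-month mark: the one-month date is when it must explain why the extension is needed.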

New Robot Swims With No Motor and No Battery

This polymer’s movements activate a switch in the robot's body attached to a paddle. Once the paddle is triggered, it behaves like a rowing paddle, pushing the small robot forward. The polymer strips have the unique ability to behave differently according to their thickness, so the engineers used a variety of strip sizes to generate different responses at different times, resulting in robots that can swim at different speeds and in different directions. The engineers were even successful in making the mini robots drop off a payload. "Combining simple motions together, we were able to embed programming into the material to carry out a sequence of complex behaviors," says Caltech postdoctoral scholar Osama R. Bilal, co-first author of the paper. The team is now researching adding other functionalities to the robots, such as responding to environmental cues like pH or salinity. They are also looking into redesigning the devices to be self-resetting according to temperature shifts, enabling the robots to swim indefinitely. The engineers have some very ambitious potential projects for their little machines, such as delivering drugs and even containing chemical spills. This week was an important one for the autonomy of small robots.

The Fintech Files: Do we expect too much from AI?

While many think AI is the future, lots of financial institutions are using it to tidy up the past. Innovation labs and centres of excellence sound impressive, but behind the grand declarations, experts know that with AI the output is only as good as the input: you need solid data for it to work. And that’s the main problem for so many institutions. Internal data is often messy. For now, Nomura’s AI lab is designed to tackle just that; dealing with historical data as well as making sure all live data is stored consistently so it can be used for analytics in the future. Something else that is holding up progress: Unlike collaborations in blockchain, we haven’t seen many banks teaming up on AI yet. Hardly surprising, given its potential competitive advantages. But it makes it hard to gauge what City firms have achieved so far, Mohideen said. The AI hype started off with visions of robots taking over the trading floor. At the moment, most of them are still just doing admin.

GDPR in real life: Fear, uncertainty, and doubt

"Most industries face an ambitious regulatory agenda, and have been doing so for years. When considering GDPR two things happened: Firstly, it was de-prioritized in relation with other topics with an earlier deadline, secondly organizations have been -- across the board -- underestimating the impact of the new legislation on processes and systems. When GDPR was eventually picked up in a structural way, it has become increasingly clear to most organizations that, although they will be able to put into place policies and processes, the long tail will be in the implementation of various aspects into the (legacy) IT landscape. This is bound to be a large part of the post May 25 backlog for most of them. Strategically not complying should be a thing of the past, where previous legislation would in the worst case fine relatively small amounts. GDPR more fundamentally will become part of the license to operate with serious implications, both monetary as well as reputational. The biggest fear for heads of communication and board members alike is becoming the showcase in the media in the coming weeks, months."

Optimistic about AI and the future of work

Despite what you may have heard elsewhere, the future of work in a world with artificial intelligence (AI) is not all doom and gloom. And thanks to a research-backed book from Malcolm Frank, What to Do When Machines Do Everything, we have data to prove it. Also, thanks to new educational approaches, we are better equipped to prepare students and displaced workers for a future with AI. All of these topics were covered at Cornell’s Digital Transformation Summit, where my colleague Radhika Kulkarni and I spoke alongside Frank and some of our country’s top educational leaders. Frank, Executive VP of Strategy and Marketing at Cognizant, says we’re experiencing the fourth industrial revolution. He anticipates that the percentage of job loss from AI will correspond with job loss rates during other periods of automation throughout history, including automation through looms, steam engines and assembly lines. Fundamentally, workforce changes from AI will be like those during the industrial revolution and the introduction of the assembly line. About 12 percent of jobs will be lost. Around 75 percent of jobs will be augmented. And there will be new jobs created.

Gatwick Airport embraces IoT and Machine Learning

Experts are good at finding great ways to utilize limited resources, which is particularly important at Gatwick. When aided by IT, however, they can do even more. Machine learning can detect busy areas in the airport through smartphones, and tracking these results over the long term can provide key insights into optimizing day-to-day operations. When making decisions, Gatwick’s management will be aided by powerful data that can provide insights not attainable with more traditional technologies, and the new IT infrastructure will be key to this analysis. Facial recognition technology will boost security as well as track late passengers, and personalized services based on smartphones or wearable technology can provide valuable updates to travellers on a personal level. Dealing with lost baggage can be a time-consuming and often stressful process. Armed with its new IT infrastructure, Gatwick and its airline operators are poised to offer a better alternative. Being able to track luggage and its owners creates new opportunities for simplifying the check-in and baggage claim process, helping get travellers in and out of the airport in a prompt and seamless manner.

What Is The Future Of Cryptocurrencies And Blockchain?

The cryptocurrency market is highly volatile. More than anything, cryptocurrencies currently serve investors as a speculative instrument. There are huge risks involved, and many nations have been restricting them in one way or another. Many scholars and industry leaders have stated that Bitcoin and many other cryptocurrencies are some sort of Ponzi scheme. So when asked what the future of cryptocurrencies looks like, Simon is quite optimistic. Rather than focusing on their day-to-day movements, if one looks at the big picture, it is quite evident that these currencies serve some particular function. ... Another important point of observation is that even though more and more nations are banning one cryptocurrency or another, especially in Southeast Asia, more money has already been invested in ICOs in the first four months of 2018 than in the whole of 2017. This clearly indicates that more and more institutional money is moving into the crypto world.

Governance and culture will differentiate banks in 2018

It is probably no longer enough for board members to have just a simple understanding of regulatory updates like VAT or IFRS 9. They also need to better understand how emerging technologies such as fintech, regulatory technology (regtech) and blockchain can disrupt future banking operations and the financial sector in general. Boards of directors at banks still have ample opportunities to reshape their business models, and by leveraging technology, they can develop products that create further competitive advantage. To strengthen board governance even further, banks must also develop a skills matrix that includes knowledge and experience in financial reporting and internal controls, strategic planning, risk management, and corporate governance standards. The skills matrix, which should ideally be completed by the board, can include specialist requirements for capital markets, risk management, audit, finance, regulatory compliance and information technology (IT). Some banks in the UAE have started to appoint directors in accordance with the expertise requirements in the matrix.

This cryptocurrency phishing attack uses new trick to drain wallets

Researchers note that MyEtherWallet is an appealing target for attackers because it is simple to use, but its weaker security compared with banks and exchanges makes it a prominent target for attack. Once the user hits a MEWKit page, the phishing attack gets underway, with credentials including logins and the private key of the wallet being logged by the attackers. After that, the crooks look to drain accounts when the victim decrypts their wallet. The scam uses scripts which automatically create the fund transfer by pressing the buttons like a legitimate user would, all while the activity remains hidden -- it's the first time an attack has been seen to use this automated tactic. The back end of MEWKit allows the attackers to monitor how much Ethereum has been collected, as well as keeping a record of private user keys and passwords which can potentially be used for further attacks. Those behind MEWKit appear to have been active for some time and have carried out some sophisticated campaigns. Researchers say MEWKit demonstrates a "new dedicated effort from threat actors to pursue cryptocurrency".

Quote for the day:

"Do not listen to those who weep and complain, for their disease is contagious." -- Og Mandino

Daily Tech Digest - May 18, 2018

“Elicitation of requirements and using those requirements to get IT onboard and understand what the client really wants, that’s one of the biggest responsibilities for BAs. They have to work as a product owner, even though the business is the product owner,” Gregory says. “[They need to ask:] What do the systems need to do, how do they do it, who do we need to get input from, and how do we get everyone to agree on what we need to do before we go and do it? The BA’s life revolves around defining requirements and prioritizing requirements and getting feedback and approval on requirements,” says Jeffrey Hammond, vice president and principal analyst at Forrester Research. The role of a business analyst is constantly evolving and changing – especially as companies rely more on data to advise business operations. Every company has different issues that a business analyst can address, whether it’s dealing with outdated legacy systems, changing technologies, broken processes, poor client or customer satisfaction or siloed large organizations.

Why AI is the perfect software testing assistant

Software testers are highly analytical, creative problem solvers. To identify hidden defects and areas where users might get frustrated, they must ask what others haven't asked and see what others don't see. But the analytical process takes time, and it isn't always as efficient as today's businesses and the users of their software demand. Artificial intelligence (AI), and its ability to search data sets for golden nuggets, could really come in handy here. An AI tool could quickly locate tests that have already been written to cover a particular scenario or new line of code. The system could even tell testers which test cases are most appropriate for the requirement. Over time, an AI tool could even pinpoint what might be causing the bugs that those tests find, based on past data. When combined with testers' wealth of knowledge about the product and its users, AI has the potential to significantly increase testing efficiency. ... We are beginning to see a few AI-enhanced testing tools hit the market now; initial capabilities include highlighting areas of risk that need further testing or that weren't covered at all. There will be many more advanced tools released in the coming months and years.
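As a toy stand-in for the kind of tool described, the sketch below ranks existing test cases by word overlap with a change description; a real AI tool would use learned representations and historical defect data instead of this naive scoring. The test names and descriptions are invented for illustration:

```python
def rank_tests(change_description, test_cases):
    """Score each (name, description) test by token overlap with the
    change description, highest first -- a crude proxy for 'locate tests
    that have already been written to cover a particular scenario'."""
    change_tokens = set(change_description.lower().split())

    def score(test):
        _name, description = test
        return len(change_tokens & set(description.lower().split()))

    return sorted(test_cases, key=score, reverse=True)

tests = [
    ("test_login_timeout", "user login session timeout handling"),
    ("test_cart_total", "shopping cart total price calculation"),
    ("test_password_reset", "password reset email flow"),
]
ranked = rank_tests("fix session timeout on login", tests)
print(ranked[0][0])  # test_login_timeout
```

Even this crude heuristic shows the shape of the workflow: the tester describes the change, the tool proposes the most relevant existing tests, and the tester applies judgment to the shortlist.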

Blockchain technology lacks enough use cases to be disruptive, says Worldpay

A lack of strong use cases for blockchain is preventing the technology from disrupting the financial services industry, according to Worldpay. The payment company’s head of technology operations, Jason Scott-Taggart, said the organisation had not ruled out using blockchain in future, but the technology still has some way to go. “You’d be surprised, but in payments blockchain is not as disruptive as people assume it is. There’s not a lot of demand for cryptocurrencies, and blockchain as a technology is not something we have seen a good application for in what we do yet,” he told Computer Weekly in an interview at the ServiceNow Knowledge 18 conference. His view echoes research from Gartner, which found just 1% of CIOs are currently undertaking blockchain projects and 8% plan to start one in the short term. The analyst firm’s vice-president, David Furlonger, said the technology was “massively hyped” and warned “rushing into blockchain deployments could lead organisations to significant problems of failed innovation, wasted investment [and] rash decisions”.

Improve the rapid application development model for deployment readiness

An increasing number of enterprises adopt rapid application development tools rather than reworking their DevOps toolchain. Kubernetes, Marathon and other container orchestration platforms easily combine with continuous integration tools such as Jenkins to make every stage of rapid development, from unit testing through production, part of an explicit flow. The move from idea to prototype is defined in rapid development terms, using rapid development tools. Jenkins, Buildbot, CruiseControl and similar tools frame production as a stage of rapid or continuous development. At each stage, they link to container orchestration for deployment. Simply hosting application code in containers does not guarantee that the orchestration practices for each stage will be comparable, but it does organize the process overall. Containers, and a single orchestration tool, provide commonality across all stages of rapid application development to ensure that every stage is tested, including the transition to production. The rapid application development model, in both setups, is a string of testing and integration phases linked together.

Adware bundle makes Chrome invisible to launch cryptojacking attacks

Known as cryptojacking, this practice involves the use of often-legitimate mining scripts that are deployed in browsers without user consent, with the proceeds funneled to mining pools controlled by threat actors. According to the publication, the bundle creates a Windows autorun entry that uses specific code to launch Google Chrome in an invisible, headless state. The browser then connects to a mining page whenever the user logs into Windows. This page launches the CoinCube mining script, which steals processing power to mine Monero. CPU usage may spike to as much as 80 percent, and while victims may notice their PCs are slow, it could be a very long time before the software is uncovered and removed -- or users may simply blame Chrome for the oddity. The researcher opened the web page responsible for the script in a standard browser window and came across an interesting element: the page masquerades as a Cloudflare anti-DDoS page.
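Chrome's documented command-line switches show how a windowless launch is possible at all; the sketch below is illustrative (the malware reportedly triggers the equivalent behavior programmatically rather than from a visible prompt, and the URL is a placeholder):

```
# --headless runs Chrome with no visible window; --disable-gpu is commonly
# paired with it on Windows. The URL here stands in for the mining page.
chrome --headless --disable-gpu "https://example.com/mining-page"
```

A process started this way appears only in the task list, which is why victims notice the CPU load long before they notice the browser.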

Telegrab: Russian malware hijacks Telegram sessions

Cisco Talos researchers Vitor Ventura and Azim Khodjibaev dubbed the malware Telegrab and analyzed two versions of it. The first, discovered on April 4, 2018, stole only browser credentials, cookies, and any text files it could find on the system. The second, spotted less than a week later, can also collect Telegram's desktop cache and key files, as well as login information for the Steam website. To steal the Telegram cache and key files, the malware does not exploit any software flaw. It can target only the desktop version of the popular messenger, which does not support Secret Chats and does not enable the auto-logout feature by default. This means the attacker can use the stolen files to access the victim's Telegram session (if the session is open), along with contacts and previous chats. Telegrab is distributed via a variety of downloaders, and it checks whether the victim's IP address appears on a list that includes Chinese and Russian addresses, along with those of anonymity services in other countries. If it does, the malware exits.
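The regional IP check described above can be sketched with Python's standard `ipaddress` module. The CIDR ranges below are hypothetical stand-ins (documentation-reserved blocks) for whatever exclusion list the malware actually carries:

```python
import ipaddress

# Hypothetical CIDR blocks standing in for the malware's exclusion list
# (the report describes Chinese and Russian ranges plus anonymity services).
EXCLUDED_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def should_exit(victim_ip: str) -> bool:
    """Return True if the address falls inside any excluded range,
    mimicking the 'if listed, exit' behavior the researchers describe."""
    addr = ipaddress.ip_address(victim_ip)
    return any(addr in net for net in EXCLUDED_RANGES)
```

An address inside a listed range (e.g. `should_exit("203.0.113.42")`) returns True, so the malware would terminate; any other address returns False and execution continues.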

Blockchain will be the killer app for supply chain management in 2018

Private or "permissioned" blockchains can be created within a company's four walls, or between trusted partners, and centrally administered while retaining control over who has access to information on the network. Blockchain can also be used between business partners, such as a cloud vendor, a financial services provider and its clients. Bill Fearnley, Jr., research director for IDC's Worldwide Blockchain Strategies, recently returned from visiting company clients in China, where he found "everybody wanted to talk about supply chain." "If you build a blockchain ledger within [a single company], that has a certain value," Fearnley said. "The real value for blockchain is when you use distributed electronic ledgers and data to connect with suppliers, customers and intermediaries." One major challenge with supply chain management today is trade finance record keeping, much of which is still based on inefficient systems: faxes, spreadsheets, emails, phone calls and paper.

Zara concept store greets shoppers with robots and holograms

At Zara’s new flagship store in London, shoppers can swipe garments along a floor-to-ceiling mirror to see a hologram-style image of what they’d look like as part of a full outfit. Robot arms get garments into shoppers’ hands at online-order collection points. iPad-wielding assistants also help customers in the store order their sizes online, so they can pick them up later. “Customers don’t differentiate between ordering online or in a store,” spokesman Jesus Echevarria Hernandez said. “You need to facilitate that as best as you can.” The store, which opened Thursday, shows how retailers are increasingly blending online and bricks-and-mortar shopping in a bid to keep up with the might of Amazon.com Inc. Inditex SA, the Spanish company that owns Zara, calls it an example of the technologies it will implement around the world. ... Amazon is moving the other way, building out its physical retail presence. Not only has it acquired grocer Whole Foods Market Inc., it has opened Amazon Go convenience stores, which use artificial intelligence and video cameras in lieu of checkouts, in several U.S. cities.

Icinga: Enterprise-Grade Open-Source Network Monitoring That Scales

Icinga runs on most of the popular Linux distros, and the vendor provides detailed installation instructions for Ubuntu, Debian, Red Hat (including CentOS and Fedora) and SUSE/SLES. Icinga does not publish specific hardware requirements, but our installation ran well on a quad-core processor with 4 GB RAM, which is probably a good starting point for a basic installation. ... As with most monitoring applications, storage is an important variable that depends largely on the number of hosts and services monitored and how often information is written to the log. With too little storage, the logs can easily fill up and freeze the system. We were able to quickly install Icinga on Ubuntu 16.04 LTS with just a few simple commands at the prompt. The first step was to add the necessary package repository, and then install the actual Icinga application. Icinga can be used to monitor the availability of hosts and services from switches and routers, as well as a variety of network services such as HTTP, SMTP and SSH.
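A sketch of the sort of commands involved, assuming the official Icinga apt repository for Ubuntu 16.04 (xenial); exact repository paths and key handling may have changed since publication:

```
# Add the Icinga package signing key and repository (paths illustrative)
curl -fsSL https://packages.icinga.com/icinga.key | sudo apt-key add -
echo "deb https://packages.icinga.com/ubuntu icinga-xenial main" | \
    sudo tee /etc/apt/sources.list.d/icinga.list

# Install and start the Icinga 2 daemon
sudo apt-get update
sudo apt-get install -y icinga2
sudo systemctl enable --now icinga2

# Validate the configuration before pointing it at real hosts
sudo icinga2 daemon -C
```

The final validation step is worth the habit: Icinga 2 refuses to start on a broken configuration, and `daemon -C` reports the problem without disturbing a running instance.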

CISO soft skills in demand as position evolves into leadership role

You need to be able to understand what engineering is trying to do and what their goals are, what marketing and procurement are doing, and what the customer is trying to do and what their goals are. If you can't empathize with their goals and challenges, you can't influence. So much flows from that: your communication skills and communication style will flow from empathy. You also need to understand what we call the data subject -- the consumer who doesn't understand what's happening to their data -- and to have empathy for them, as well as for all the other stakeholders. It's empathizing with everybody and making the wisest decision to push for the best outcome you can. ... It's important for at least two different reasons. One, from a practical perspective, I've talked a lot about the skills gap. If we're blocking 50% of the planet from joining this career path, we're really contributing to our biggest challenge. Then the other part: women across the globe are economically oppressed, and information security is a lucrative field. I want to get women into the information security field so they can be financially independent and make a good living.

Quote for the day:

"Leadership is about taking responsibility, not making excuses." -- Mitt Romney