Daily Tech Digest - May 23, 2018

12 Frequently Asked Questions on Deep Learning


Feature engineering is the process of using domain knowledge to create feature extractors that reduce the complexity of the data and make patterns more visible to learning algorithms. This process is difficult and expensive in terms of time and expertise. In machine learning, most of the applied features need to be identified by an expert and then hand-coded according to the domain and data type. For example, features can be pixel values, shape, texture, position and orientation. The performance of most machine learning algorithms depends on how accurately the features are identified and extracted. Deep learning algorithms instead try to learn high-level features from the data itself. This is a very distinctive part of deep learning and a major step ahead of traditional machine learning, because it removes the need to develop a new feature extractor for every problem. A convolutional neural network, for example, will learn low-level features such as edges and lines in its early layers, then parts of faces, and then a high-level representation of a face.
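
To make the layer-by-layer idea concrete, here is a minimal sketch of such a network, assuming TensorFlow/Keras; the input size and class count are made up for illustration:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Early layers see raw pixels and learn low-level features (edges, lines);
# deeper layers compose them into parts and high-level representations.
model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),           # raw pixel values
    layers.Conv2D(16, 3, activation="relu"),   # low level: edges, lines
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),   # mid level: textures, parts
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),   # high level: whole-face patterns
    layers.GlobalAveragePooling2D(),
    layers.Dense(10, activation="softmax"),    # e.g. 10 identities to recognise
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```

No feature extractor is hand-coded anywhere above; the convolution filters are learned from the training data.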



No CS degree? For skilled developers, 75% of hiring managers don't care

Strong work experience is the most important qualification that recruiters and hiring managers look for when filling tech positions, the report found. However, resume-bolstering factors like degree, prestige, and skill keywords are not accurate predictors of future job success, according to the report. Instead, hiring managers and recruiters are looking for indicators that demonstrate ability, such as previous work experience, years of work, and personal projects, which come closer to measuring a candidate's skills. ... Hiring managers' top three measures of success in recruiting were quality of candidate, future performance success, and employee retention, the report found. Failing to align on skills and on expectations for candidates are two of the top hurdles facing hiring managers when it comes to working with recruiters, the report found. To solve this problem, recruiters should regularly check in with hiring managers to understand the nuances of the technical skills hiring managers are looking for in each open role. For example, what are the crucial must-have skills for a fullstack developer versus a back-end developer? This can help narrow down the pool of qualified candidates.


Doctors are using AI to see how diseases change our cells


This model can predict where these organelles will be found in any new cell, so long as it’s provided with an image from a microscope. The researchers also used AI to create a probabilistic model that takes its best guess at where one might expect to find those same organelles if provided with a cell’s size and shape, along with the location of its nucleus. These models are useful for doctors and scientists because they provide a close-up look at the effects of cancer and other diseases on individual cells. By feeding the AI with data and images of cancer cells, they can get a more complete picture of how the cell, and its individual components, are affected. And that can indicate how doctors can help each patient with treatment tailored to their disease. The team from the Allen Institute hopes their tools can help democratize medical research, improving healthcare in underserved areas. So the researchers are working to improve them, creating more complete models, according to NPR. They hope to have a broader database, full of models of more cells, available over the next few months.


Everything you need to know about the new general data protection regulations

GDPR applies to any organisation operating within the EU, as well as any organisation outside of the EU which offers goods or services to customers or businesses in the EU. That ultimately means that almost every major corporation in the world will need to be ready when GDPR comes into effect, and must start working on its GDPR compliance strategy. There are two different types of data-handlers the legislation applies to: 'processors' and 'controllers'. The definitions of each are laid out in Article 4 of the General Data Protection Regulation. A controller is a "person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of processing of personal data", while a processor is a "person, public authority, agency or other body which processes personal data on behalf of the controller". If you are currently subject to the UK's Data Protection Act, for example, it's likely you will have to look at GDPR compliance too.


You’ve probably been hiring the wrong kind of data scientist

A lot of people like to call themselves data scientists because they’re using point-and-click tools, like Tableau and Excel, to perform data analysis and visualization in order to gain business insights. ... The real challenge comes from handling large datasets, including textual or other unstructured raw data, and doing so in real time, all of which requires programmatic execution. That is, coding. Indeed, many of the gains in AI and data science are thanks to what researchers are calling the “Unreasonable Effectiveness of Data”: being able to learn programmatically from astronomical datasets. This work is also highly nuanced and detailed, and doing the wrangling and cleaning properly is crucial for developing effective machine intelligence later on. Point-and-click software just isn’t sophisticated enough to substitute for good programming skills (after all, you can perform machine learning with Excel). This goes beyond just the usual mantra of “garbage in, garbage out.”
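
As a contrast with point-and-click analysis, here is a minimal sketch of programmatic wrangling with pandas; the file name and column names are hypothetical:

```python
import pandas as pd

# Load raw, messy event data and clean it programmatically.
df = pd.read_csv("raw_events.csv")
df["timestamp"] = pd.to_datetime(df["timestamp"], errors="coerce")
df = df.dropna(subset=["timestamp", "user_id"])          # drop unusable rows
df["text"] = df["text"].str.strip().str.lower()          # normalise free text
df = df.drop_duplicates(subset=["user_id", "timestamp"])

# A repeatable, scriptable aggregation: daily unique users.
daily_users = df.set_index("timestamp").resample("D")["user_id"].nunique()
print(daily_users.head())
```

Every step above is versionable, testable and rerunnable, which is exactly what the point-and-click workflow lacks.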


A Simple and Scalable Analytics Pipeline


The core piece of technology I’m using to implement this data pipeline is Google’s DataFlow, which is now integrated with the Apache Beam library. DataFlow tasks define a graph of operations to perform on a collection of events, which can be streaming data sources. This post presents a DataFlow task implemented in Java that streams tracking events from a PubSub topic to a data lake and to BigQuery. An introduction to DataFlow and its concepts is available in Google’s documentation. While DataFlow tasks are portable, since they are now based on Apache Beam, this post focuses on how to use DataFlow in conjunction with additional managed services on GCP to build a simple, serverless, and scalable data pipeline. The data pipeline that performs all of this functionality is relatively simple. The pipeline reads messages from PubSub and then transforms the events for persistence: the BigQuery portion of the pipeline converts messages to TableRow objects and streams directly to BigQuery, while the AVRO portion of the pipeline batches events into discrete windows and then saves the events to Google Storage.
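
The post's pipeline is written in Java, but the same shape can be sketched with the Beam Python SDK; the topic, table and bucket names below are hypothetical, and the file branch is simplified to newline-delimited JSON rather than AVRO:

```python
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms.window import FixedWindows

opts = PipelineOptions(streaming=True)
with beam.Pipeline(options=opts) as p:
    events = (p
              | "ReadPubSub" >> beam.io.ReadFromPubSub(
                    topic="projects/my-project/topics/tracking")
              | "Parse" >> beam.Map(json.loads))

    # Branch 1: stream each event straight into BigQuery.
    events | "ToBigQuery" >> beam.io.WriteToBigQuery(
        "my-project:analytics.events")

    # Branch 2: batch events into five-minute windows, then write files
    # to Google Cloud Storage (the original post writes AVRO here).
    (events
     | "Window" >> beam.WindowInto(FixedWindows(5 * 60))
     | "Serialize" >> beam.Map(json.dumps)
     | "ToGCS" >> beam.io.WriteToText("gs://my-bucket/events/part"))
```

This is a shape sketch rather than a drop-in replacement for the Java task; streaming file writes in particular need more care in production than WriteToText suggests.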


7 risk mitigation strategies for the cloud

The hook, says John Hodges, vice president of product strategy for AvePoint, is that “cloud services often encourage ‘casual use’ of data; I can collect, search and store anything just about anywhere.” “We often see this in systems like Box, Dropbox or OneDrive, where there is a real mixed-use danger in how content is stored and shared.” The simple solution? Prohibit services where mixed-use is likely to be a problem. ... Zero trust is an IT security strategy wherein an organization requires every user, system or device inside or outside its perimeter to be verified and validated before connecting to its systems. How can you use a zero trust model to mitigate cloud risk? For Insurity, an organization that specializes in property and casualty insurance services and software, a zero trust approach means restricting access tightly. “We provide logical access to the minimum set of users with a minimum set of rights and privileges in line with job function requirements. This control is audited internally by our Enterprise Security team and externally as part of our annual SOC audit,” says Jonathan Victor, CIO of Insurity. Regularly examine user access levels and ask yourself whether they make sense.


What Is Microservices? An Introduction to Microservice Architecture


Now, let us look at a use case to get a better understanding of microservices. Let's take the classic example of a shopping cart application. When you open a shopping cart application, all you see is just a website. But behind the scenes, the application has a service for accepting payments, a service for customer support, and so on. Assume that the developers of this application created it in a monolithic framework, so all the features are put together in a single code base and sit on top of a single underlying database. Now, suppose a new brand is coming onto the market and the developers want to add all the details of the upcoming brand to this application. They not only have to rework the service for new labels, but they also have to reframe the complete system and deploy it accordingly. To avoid such challenges, the developers of this application decided to shift from a monolithic architecture to microservices.
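
A minimal sketch, using Flask and hypothetical endpoints, of what carving the payment capability out into its own independently deployable service might look like:

```python
from flask import Flask, jsonify

payments = Flask("payments")  # deployed, scaled and updated on its own

@payments.route("/charge/<order_id>", methods=["POST"])
def charge(order_id):
    # Talks only to the payment provider and the payments data store;
    # the cart, catalogue and customer services live in other processes.
    return jsonify({"order": order_id, "status": "charged"})

if __name__ == "__main__":
    payments.run(port=5001)  # other services listen on their own ports
```

Adding the new brand's details then means redeploying only the service that changed, not reframing the whole system.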


Cybercriminals Battle Against Banks' Incident Response

Persistent attackers aren't backing down when banks detect them and launch their incident response processes, either. One in four bank CISOs in the Carbon Black study say their institution faced attackers fighting back when they got spotted, trying to deter defenses and the investigation into the attack. "They are leaving wipers or destructive malware to inhibit [IR], deleting logs, and inhibiting the capacity of forensics tools," for example, says Tom Kellermann, chief cybersecurity officer at Carbon Black. "Sometimes they are using DDoS to create smokescreens during events." These counter-IR activities are forcing banks to be more proactive and aggressive as well, he says. "They need to have threat hunting teams. You can't just rely on telemetry and alerts." While banks are often relying on their IR playbooks, attackers have the freedom to freelance and counter IR activities. They're changing their malware code on the fly when it gets detected, deleting activity logs to hide their tracks, and even targeting bank security analysts and engineers to help their cause.


Certain types of content make for irresistible phishes

It used to be that fear, urgency and curiosity were the top emotional motivators behind successful phishes. Now they’ve been replaced by entertainment, social media and reward/recognition. According to the company, simulated eCards, internal promotion/reward programs, and a number of financial and compliance scenarios (e.g., phishes with “Financial Information Review” or “Compliance Training” in the subject line) are most successful at getting users to click. Employees should be trained to be aware of their emotional reactions to emails and see them as phishing triggers. “When creating simulations, remember consumer scams—those phony Netflix or LinkedIn emails sent to busy employees, who are glad to switch gears and click on something fun,” the company notes. “Understand the dynamics of entertainment or social phishing (think uncritical social acceptance and shortened URLs).” And when it comes to emails promising rewards, employees should be taught to be critical of rewards and deals that sound too good to be true.



Quote for the day:


"You don't have to hold a position in order to be a leader." -- Anthony J. D'Angelo


Daily Tech Digest - May 22, 2018

Smart Homes of Tomorrow – This Is Why We Can’t Have Nice Things


What new design concepts are needed to address these emerging technologies and the risks presented by the ever-connected smart home? It's not about informing everyone that everything can be hacked. It's about creating awareness and helping others understand the risks involved with modern technology. It's about helping the builders and designers understand the technological solutions required by their clients and how to implement them correctly, so they too can educate the user on how to maintain their system safely and securely. Consumers need to be aware of their devices' remote and local environment, and how their data is collected and stored. They also need to be aware of how their personal devices and appliances can be affected by outages outside of their control, like a DDoS attack on a cloud environment or something as simple as a power outage. Finally, we as a community need to put pressure on the manufacturers to produce secure devices with clear plans on how to patch and mitigate future vulnerabilities. Manufacturers also have to begin working together to keep user data secure as it is integrated across a crowded environment of smart links and physical devices, ultimately preventing unauthorised remote access.



Legit tools exploited in bank heists


Native tools such as PowerShell and Windows Management Instrumentation (WMI) grant users exceptional rights and privileges to carry out the most basic commands across a network. These “non-malware” or fileless attacks account for more than 50% of successful breaches, the report said, with attackers using existing software, allowed applications and authorised protocols to carry out malicious activities. In this way, attackers are able to gain control of computers without downloading any malicious files and therefore remain unnoticed by malware-detection security systems.  ... Finally, PowerShell is used to connect to a command and control server to download a malicious PowerShell script designed to find sensitive data and send it to the attacker, all without downloading any malware. Almost every Carbon Black customer (97%) was targeted by a non-malware attack during each of the past two years, but the report notes that awareness of malicious usage for tools such as PowerShell has never been higher, with 90% of CISOs reporting seeing an attempted attack using PowerShell.
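
On the defensive side, a minimal sketch of hunting for the download-cradle patterns described above in an exported PowerShell log; the log path and pattern list are illustrative only:

```python
import re
from pathlib import Path

# Strings commonly seen in fileless download cradles.
CRADLES = re.compile(
    r"DownloadString|DownloadFile|Invoke-Expression|IEX|FromBase64String",
    re.IGNORECASE)

for line in Path("powershell_operational.log").read_text(
        errors="ignore").splitlines():
    if CRADLES.search(line):
        print("Suspicious PowerShell activity:", line.strip())
```

Because the attack never drops a file, this kind of logging-based hunting is one of the few places it leaves a trace.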


4 best practices for tapping the potential of prescriptive analytics

One potential value of prescriptive analytics is that you don’t necessarily need a ton of data to reach the best decision or outcome. Prescriptive analytics focuses the question you’re asking, and the decisions you’re trying to reach, to one tangible answer using a smart model of your business that is not dependent on the amount of data (how much or how little) that you have. Predictive techniques and functionalities can be great at identifying a multitude of options through statistical modeling and forecasting, as long as you have the relevant data—but that’s precisely the problem. It’s difficult to process and synthesize numerous options and the nuanced differences among them to determine what you should actually do. How can you be sure that you’re making the best decision? How can you be sure of the impact it will have on your company? Prescriptive analytics can involve hundreds of thousands of tradeoffs associated with a question you might have, and it uses the available data to identify the best decision and impact relative to the goal you’re trying to achieve. 
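
The "one tangible answer" idea is essentially optimization. A minimal sketch with SciPy, using made-up numbers: choose production quantities that maximise profit under capacity constraints, rather than merely forecasting outcomes:

```python
from scipy.optimize import linprog

# Maximise 3*x1 + 5*x2 profit -> linprog minimises, so negate the objective.
result = linprog(
    c=[-3, -5],
    A_ub=[[1, 2],        # machine-hours each product uses on machine A
          [3, 1]],       # ... and on machine B
    b_ub=[14, 15],       # hours available on each machine
    bounds=[(0, None), (0, None)])

print("best quantities:", result.x)
print("best profit:", -result.fun)
```

The solver weighs every feasible tradeoff and returns the single best decision relative to the stated goal, which is the prescriptive step that prediction alone does not take.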


Technological advancements are changing the landscape for new and existing businesses alike


With Brexit looming over both small and large businesses operating within the UK, maintaining a competitive price point despite new tariffs will be a tough challenge to overcome. However, with new technological solutions opening up greater levels of efficiency and productivity, this task should not appear as daunting as many believe. Tech companies will continue to innovate new solutions for both current and future business issues, finding new ways to improve efficiency within the business environment. Voice recognition technology such as iBOB, for instance, is now freeing up valuable time for business owners and receptionists. It is reducing the need for routine customer service interactions on the phone, such as appointment bookings, which makes the remaining human contact far more impactful and allows small businesses to focus their resources on more profitable aspects of their business. Employing affordable technological solutions, with an aim to focus added work hours on tasks more closely related to the bottom line, will allow existing businesses to maintain their competitive position within the market.


Cultural change more important than tech when implementing big data

“It is not your data or my data; it is the firm’s data, and the value you create for the business is from that data,” Tewary explained. “It is a transformation. It’s changing the people culture aspect, so there’s a lot of education. You know, you have to be an evangelist. You wear multiple hats to show people the value.” For Tewary at Verizon, finding advocates within the company for sharing big data was crucial. “We found champions,” he said. “For example, finance … was a good champion for us, where we used the data and analytics to really actually launch some very critical initiatives for the firm — asset-backed securities. … That created the momentum.” Dobrin agreed with this strategy of using champions within an enterprise to help lead the way for the entire company. “It’s not just a jump to the top of the ladder, because there’s just a lot of work that’s required to do it. You can do that within a business unit.” While the whole enterprise doesn’t need to get there all at the same time, as other areas of the enterprise begin to see the use of big data and how it can change the game, they will be open to the idea, Dobrin explained.


What is SDN? How software-defined networking changed everything

[Diagram: Open vSwitch / OpenContrail architecture]
To stay competitive in networking and to avoid being rendered obsolete by history, network equipment vendors have either blazed the trails for SDN, or found themselves adopting SDN reluctantly, perhaps looking a little singed in the process. One vendor clearly in the former camp, not the latter, is Juniper Networks. It plunged into the SDN field during the fateful year of 2012, first by purchasing a firm called Contrail, and then by building it into an open-source virtual appliance ecosystem unto itself: OpenContrail. As the diagram above depicts, OpenContrail serves as a device that provides the routing logic for distributed operating systems that host Docker containers. ... "It's a big part of operating and automating both a virtual and a physical infrastructure. It orchestrates the VNFs [virtual network functions] and puts together the service chains, all the way to the edge and to the core. Contrail uses vRouter and, in a distributed data center infrastructure, can reach into any part of the cloud, string up the required VNFs, stitch together the different pieces of the service, and deliver a custom service to a certain vertical, or a group of end customers. It automates that whole process of customizing the services that can be offered, ultimately, to our service provider customers."


How can you categorize consumers who keep remaking themselves?


Perhaps there won’t be a “mass market” for consumer goods anymore; just a mass of individuals who are increasingly difficult to categorize, and who reinvent themselves from moment to moment, from platform to platform. People will still want to gather in groups with like-minded people. But they will find them through technology and data and connect with them based on their shared values and interests rather than practical connections, such as living in the same area. Rather than being defined by markers such as gender, age or location, they will express themselves in ways that are more fluid and flexible. In one of the future worlds we modeled at our hack week in Berlin, these groups – or “tribes” – broke down physical borders and formed their own communities, both real and virtual. They started to pool their purchasing power and demand a different relationship with brands. Today, consumer-facing companies try to tailor offers and discounts that will appeal to individual consumers, based on their purchasing data – with varying degrees of skill and success. In the future, will products themselves be individualized?


Containers and microservices and serverless, oh my!

Instead of using containers to run applications, serverless computing replaces containers with another abstraction layer. Its functions or back-end services are one-job programs, which use compute resources without the developer having to worry about them. Instead of calling functions in the traditional sense, in serverless a developer calls a working program to provide a service for the program they're building. The Cloud Native Computing Foundation (CNCF) Serverless Working Group defines serverless computing as “building and running applications that do not require server management. It describes a finer-grained deployment model where applications, bundled as one or more functions, are uploaded to a platform and then executed, scaled, and billed in response to the exact demand needed at the moment.” Or for another definition: “Serverless architectures refer to applications that significantly depend on third-party services,” says Mike Roberts, engineering leader and co-founder of Symphonia, a serverless and cloud architecture consultancy. “By using these ideas, and by moving much behavior to the front end, such architectures remove the need for the traditional 'always on' server system sitting behind an application.”
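
A minimal sketch of such a one-job function in the AWS Lambda style; the event shape and the job itself are hypothetical:

```python
import json

def handler(event, context):
    # Runs only when invoked; the platform handles provisioning,
    # scaling and billing per call -- no "always on" server to manage.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```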


Websites Still Under Siege After 'Drupalgeddon' Redux

Nearly two months after critical Drupal fixes were released, security firm Malwarebytes says it is still finding dozens of unpatched websites that have been exploited to host cryptocurrency miners or in other cases redirect to malware (see Cryptocurrency Miners Exploit Widespread Drupal Flaw). The problems stem from two critical vulnerabilities in Drupal, both of which are remotely executable. That's a perfect combination for attackers: Give them a widely used piece of software such as Drupal, as well as known vulnerabilities that can be easily and remotely exploited without even needing to attempt to trick a victim into taking any action. The first flaw, CVE-2018-7600, was revealed March 28, and the second, CVE-2018-7602, on April 25. The vulnerabilities were so severe that they were dubbed Drupalgeddon 2 and Drupalgeddon 3. Although patches have been available since the vulnerabilities were publicized, attackers are still taking advantage of websites that haven't been upgraded. "Rolling out a CMS is the easy part," writes Jerome Segura, lead malware intelligence analyst with Malwarebytes, in a blog post. "Maintaining it is where most problems occur due to lack of knowledge, fear of breaking something and, of course, costs."


Effective IoT Security Requires Machine Learning

Artificial intelligence (AI) is a branch of computer science that focuses on the theory and development of computer systems capable of performing tasks that normally require human intelligence, such as visual perception and decision-making. Machine learning is a subset of AI that focuses on using algorithms to parse data, learn from it, and then make a prediction about something. In contrast to a static algorithm, a critical aspect of machine learning is that the machine is “trained” using large amounts of data and algorithms that give it the ability to continually learn how to perform a given task. Tools based on machine learning are necessary to supplement the existing set of security tools. These new tools help organizations identify and mitigate the emerging generation of security breaches that are designed to leverage both legacy and evolving attack surfaces to evade the enterprise's traditional defenses. When evaluating security tools based on machine learning, there are three key concepts that IT organizations should keep in mind.
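
As a minimal sketch of the "trained on data" idea, an unsupervised model can learn what normal device traffic looks like and flag deviations; the telemetry below is synthetic:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: bytes/sec and connections/sec for a fleet of healthy devices.
normal_traffic = rng.normal(loc=[100.0, 10.0], scale=[10.0, 2.0],
                            size=(500, 2))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

suspect = np.array([[900.0, 95.0]])   # a device suddenly flooding traffic
print(model.predict(suspect))          # -1 means "anomalous"
```

Unlike a static signature, the model's notion of "normal" comes entirely from the data it was trained on and can be retrained as the fleet evolves.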



Quote for the day:


"There are plenty of difficult obstacles in your path. Don't allow yourself to become one of them." -- Ralph Marston


Daily Tech Digest - May 21, 2018

Smaller micro datacentres utilising a modular approach are making it possible for data processing facilities to be based either on-site or as close to the location as possible. This edge computing is essential as it gives manufacturers the capabilities to run real-time analytics, rather than vast volumes of data needing to be shipped all the way to the cloud and back for processing. Modular datacentres give operators the scope to ‘pay as you grow’ as and when the time comes for expansion. The rise of modular UPS provides similar benefits in terms of power protection requirements too. And with all the additional revenues a datacentre could make from Industry 4.0, the need for a reliable and robust continuous supply of electricity becomes even more imperative. Transformerless modular UPSs deliver higher power density in less space, run far more efficiently at all power loads so waste less energy, and also don’t need as much energy-intensive air conditioning to keep them cool. Any data centre manager planning to take advantage of manufacturers’ growing data demands would be wise to review their current power protection capabilities.



Angular Application Generator - an Architecture Overview


Most of the time, tools like Angular CLI and Yeoman are helpful if we decide to simplify source generation by following a strict, pragmatic standard for developing an application. Angular CLI is useful for scaffolding, while the Yeoman generator delivers very interesting generators for Angular and brings a great deal of flexibility, because you can write your own generator to fill your needs. Both Yeoman and Angular CLI enrich software development through their ecosystems, delivering defined templates to build up the initial software skeleton. On the other hand, it can be hard to rely on templating alone. Sometimes standardized rules can be translated into a couple of templates that are useful in many scenarios. But that is not the case when trying to automate very different forms with many variations, layouts and fields, where countless combinations of templates would be produced. That would bring only headaches and long-term issues, because it reduces maintainability and incurs technical debt.


Asigra evolves backup/recovery to address security, compliance needs

Asigra addresses the Attack-Loop problem by embedding multiple malware detection engines into its backup stream as well as the recovery stream. As the backups happen, these engines are looking for embedded code and use other techniques to catch the malware, quarantine it, and notify the customer to make sure malware isn’t unwittingly being carried over to the backup repository. On the flip side, if the malware did get into the backup repositories at some point in the past, the malware engines conduct an inspection as the data is being restored to prevent re-infection. Asigra also has added the ability for customers to change their backup repository name so that it’s a moving target for viruses that would seek it out to delete the data. In addition, Asigra has implemented multi-factor authentication in order to delete data. An administrator must first authenticate himself to the system to delete data, and even then the data goes into a temporary environment that is time-delayed for the actual permanent deletion. This helps to assure that malware can’t immediately delete the data. These new capabilities make it more difficult for the bad guys to render the data protection solution useless and make it more likely that a customer can recover from an attack and not have to pay the ransom.


Moving away from analogue health in a digital world


Technology has an important role to play in the delivery of world-class healthcare. Secure information about a patient should flow through healthcare systems seamlessly. The quality, cost and availability of healthcare services depends on timely access to secure and accurate information by authorised caregivers. Interoperability cannot be solved by any one organisation in isolation. What is needed is for providers, innovators, payers, governing bodies and standards development organisations to come together to apply innovative and agile solutions to the problems that healthcare presents. We can clearly see the pressure that healthcare organisations are under at the moment. Researchers at the universities of Cambridge, Bristol, and Utah found that a staggering 14 million people in England now have two or more long-term conditions – putting a major strain on the UK’s healthcare services. To add to this, recent figures show that more than half of healthcare professionals believe the NHS’s IT systems are not fit for purpose.


Shadow IT is a Good Thing for IT Organizations

The move to shadow IT is a good thing for IT. Why? It is a wake-up call. It provides a clear message that IT is not meeting the requirements of the business. IT leaders need to rethink how to transform the IT organization to better serve the business and get ahead of the requirements. There is a significant opportunity for IT to play a leading role in business today. However, it goes beyond just the nuts and bolts of support and technology. It requires IT to get more involved in understanding how business units operate and proactively seek opportunities to advance their objectives. It requires IT to reach beyond the cultural norms that have been built over the past 10, 20, 30 years. A new type of IT organization is required. A fresh coat of paint won't cut it. Change is hard, but the opportunities are significant. This is more of a story about moving from a reactive state to a proactive state for IT. It does require a significant change in the way IT operates for many, both internally within the IT organization and externally in the non-IT organizations. The opportunities can radically transform the value IT brings to driving the business forward.


Fintech is disrupting big banks, but here’s what it still needs to learn from them

Though it’s definitely possible to grow while managing risk intelligently, it’s also true that pressure to match the “hockey-stick” growth curves of pure tech startups can lead fintechs down a dangerous path. Startups should avoid the example of Renaud Laplanche, former CEO of peer-to-peer lender Lending Club, who was forced to resign in 2016 after selling loans to an investor that violated that investor’s business practices, among other accusations of malfeasance. It’s not just financial risk that they may manage badly: the sexual harassment scandal that recently rocked fintech unicorn SoFi shows that other types of risky behavior can impact bottom lines, too. While it might be common for pure tech startups to ask forgiveness, not permission, when it comes to the tactics they use to expand, fintechs should be aware that they’re playing in a different, more risk-sensitive space. Here again, they can learn from banks — who will also, coincidentally, look for sound risk management practices in all their partners. Since the 2008 crisis, financial institutions have increasingly taken a more holistic approach to risk as the role of the chief risk officer (CRO) has broadened.


The Banking Industry Sorely Underestimates The Impact of Digital Disruption


According to McKinsey, many organizations underestimate the increasing momentum of digitization. This includes the speed of technological changes, resultant behavioral changes and the scale of disruption. “Many companies are still locked into strategy-development processes that churn along on annual cycles,” states McKinsey. “Only 8% of companies surveyed said their current business model would remain economically viable if their industry keeps digitizing at its current course and speed.” Most importantly, McKinsey found that most organizations also underestimate the work that is needed to transform an organization for tomorrow’s reality. Much more than developing a better mobile app, organizations need to transform all components of an organization for a digital universe. If this is not done successfully, an organization risks being either irrelevant to the consumer or non-competitive in the marketplace … or both. Complacency in the banking industry can be partially blamed on the fact that digitization of banking has only just begun to transform the industry. No industry has been transformed entirely, with banking just beginning to realize core changes.


Why tech firms will be regulated like banks in the future


First, we have become addicted to technology. We live our lives staring at our devices rather than talking to each other or watching where we are going. It's not good for us and, according to Pew Research, is responsible for more and more suicides, particularly amongst the young. Pew analysis found that the generation of teens they call the "iGen" – those born after 1995 – is much more likely to experience mental health issues than their millennial predecessors. Second is privacy. Facebook and other internet giants are abusing our privacy rights in order to generate ad revenues, as demonstrated by Cambridge Analytica, and Facebook is not the only one. Google, Alibaba, Tencent, Amazon and more are all making bucks from analyzing our digital footprints, and we let them because we're enjoying it, as I blogged about recently. Third is that the power of these firms is too great. When six firms – Google (Alphabet), Amazon, Facebook, Tencent, Alibaba and Baidu – have almost all the information on all the citizens of the world held digitally, it creates a backlash and a fear. I've thought about this a great deal and believe that it will drive a new decentralized internet.


Ericsson and SoftBank deploy machine learning radio network

"SoftBank was able to automate the process for radio access network design with Ericsson's service. Big data analytics was applied to a cluster of 2,000 radio cells, and data was analysed for the optimal configuration." Head of Managed Services Peter Laurin said Ericsson is investing heavily in machine learning technology for the telco industry, citing strong demand across carriers for automated networks.To support this, Ericsson is running an Artificial Intelligence Accelerator Lab in Japan and Sweden for its Network Design and Optimization teams to develop use cases. In introducing its suite of network services for "massive Internet of Things" (IoT) applications in July last year, Ericsson had also added automated machine learning to its Network Operations Centers in a bid to improve the efficiency and bring down the cost of the management and operation of networks. According to SoftBank's Tokai Network Technology Department radio technology section manager Ryo Manda, SoftBank is now collaborating with Ericsson on rolling out the solution to other regions. "We applied Ericsson's service on dense urban clusters with multi-band complexity in the Tokai region," Manda  said.


Models and Their Interfaces in C# API Design


A real data model is deterministically testable. Which is to say, it is composed only of other deterministically testable data types until you get down to primitives. This necessarily means the data model cannot have any external dependencies at runtime. That last clause is important. If a class is coupled to the DAL at run time, then it isn't a data model. Even if you "decouple" the class at compile time using an IRepository interface, you haven't eliminated the runtime issues associated with external dependencies. When considering what is or isn't a data model, be careful about "live entities". In order to support lazy loading, entities that come from an ORM often include a reference back to an open database context. This puts us back into the realm of non-deterministic behavior, wherein the behavior changes depending on the state of the context and how the object was created. To put it another way, all methods of a data model should be predictable based solely on the values of its properties. ... Parent and child objects often need to communicate with each other. When done incorrectly, this can lead to tightly cross-coupled code that is hard to understand.
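
The article's examples are C#, but the rule is language-neutral; as a sketch in Python, every method's result is a pure function of the model's own properties, so the model is deterministically testable with no database context in sight:

```python
from dataclasses import dataclass

@dataclass
class OrderLine:
    quantity: int
    unit_price: float

    def total(self) -> float:
        # Depends only on this object's own properties --
        # never on a live ORM context or the DAL.
        return self.quantity * self.unit_price

# Deterministic: the same property values always give the same result.
assert OrderLine(quantity=3, unit_price=2.5).total() == 7.5
```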



Quote for the day:
"Strategy Execution is the responsibility that makes or breaks executives" -- Alan Branche and Sam Bodley

Daily Tech Digest - May 20, 2018

Understanding how Design Thinking, Lean and Agile Work Together


Lean started out as a response to scientific management practices in manufacturing. Organisations sought efficiency through processes, rules, and procedures and management was mostly about control. But in modern business, control is a falsehood. Things are too complex, too unpredictable, and too dynamic to be controlled. Lean offers a different mindset for managing any system of work. It’s fundamentally about exploring uncertainty, making decisions by experimenting and learning and empowering people who are closest to the work to decide how best to achieve desired outcomes. Lean says be adaptive, not predictive. Agile is related to Lean. The differences are mostly about what these mindsets are applied to, and how. In conditions of high uncertainty, Agile offers ways to build software that is dynamic and can adapt to change. This isn’t just about pivoting. It’s also about scaling and evolving solutions over time. If we accept that today’s solution will be different from tomorrow’s, then we should focus on meeting our immediate needs in a way that doesn’t constrain our ability to respond when things change later. The heart of Agile is adapting gracefully to changing needs with software.



How to write a GDPR-compliant data subject access request procedure

Recital 63 of the GDPR states, “a data subject should have the right of access to personal data which have been collected concerning him or her, and to exercise that right easily and at reasonable intervals, in order to be aware of, and verify, the lawfulness of the processing.” The procedure for making and responding to subject access requests remains similar to most current data protection laws, but there are some key changes you should be aware of under the GDPR:
In most circumstances, the information requested must be provided free of charge. Organisations are permitted to charge a “reasonable fee” when a request is manifestly unfounded, excessive or repetitive. This fee must be based on the administrative cost of providing the information.
Information must be provided without delay and within a month. Where requests are complex or numerous, organisations are permitted to extend the deadline to three months. However, they must still respond to the request within a month to explain why the extension is necessary.



New Robot Swims With No Motor and No Battery

This polymer’s movements activate a switch in the robot's body attached to a paddle. Once the switch is triggered, the paddle rows, pushing the small robot forward. The polymer strips have the unique ability to behave differently according to their thickness, so the engineers used a variety of strip sizes to generate different responses at different times, resulting in robots that can swim at different speeds and directions. The engineers were even successful in making the mini robots drop off a payload. "Combining simple motions together, we were able to embed programming into the material to carry out a sequence of complex behaviors," says Caltech postdoctoral scholar Osama R. Bilal, co-first author of the paper. The team is now researching adding other functionalities to the robots, such as responding to environmental cues like pH or salinity. They are also looking into redesigning the devices to be self-resetting according to temperature shifts, enabling the robots to swim indefinitely. The engineers have some very ambitious potential projects for their little machines, such as delivering drugs and even containing chemical spills. This week was an important one for the autonomy of small robots.


The Fintech Files: Do we expect too much from AI?

While many think AI is the future, lots of financial institutions are using it to tidy up the past. Innovation labs and centres of excellence sound impressive, but behind the grand declarations, experts know that with AI the output is only as good as the input: you need solid data for it to work. And that’s the main problem for so many institutions. Internal data is often messy. For now, Nomura’s AI lab is designed to tackle just that; dealing with historical data as well as making sure all live data is stored consistently so it can be used for analytics in the future. Something else that is holding up progress: Unlike collaborations in blockchain, we haven’t seen many banks teaming up on AI yet. Hardly surprising, given its potential competitive advantages. But it makes it hard to gauge what City firms have achieved so far, Mohideen said. The AI hype started off with visions of robots taking over the trading floor. At the moment, most of them are still just doing admin.


GDPR in real life: Fear, uncertainty, and doubt

"Most industries face an ambitious regulatory agenda, and have been doing so for years. When considering GDPR two things happened: Firstly, it was de-prioritized in relation with other topics with an earlier deadline, secondly organizations have been -- across the board -- underestimating the impact of the new legislation on processes and systems. When GDPR was eventually picked up in a structural way, it has become increasingly clear to most organizations that, although they will be able to put into place policies and processes, the long tail will be in the implementation of various aspects into the (legacy) IT landscape. This is bound to be a large part of the post May 25 backlog for most of them. Strategically not complying should be a thing of the past, where previous legislation would in the worst case fine relatively small amounts. GDPR more fundamentally will become part of the license to operate with serious implications, both monetary as well as reputational. The biggest fear for heads of communication and board members alike is becoming the showcase in the media in the coming weeks, months."


Optimistic about AI and the future of work

Despite what you may have heard elsewhere, the future of work in a world with artificial intelligence (AI) is not all doom and gloom. And thanks to a research-backed book from Malcolm Frank, What to Do When Machines Do Everything, we have data to prove it. Also, thanks to new educational approaches, we are better equipped to prepare students and misplaced workers for a future with AI. All of these topics were covered at Cornell’s Digital Transformation Summit, where my colleague Radhika Kulkarni and I spoke alongside Frank and some of our country’s top educational leaders. Frank, Executive VP of Strategy and Marketing at Cognizant, says we’re experiencing the fourth industrial revolution. He anticipates that the percentage of job loss from AI will correspond with job loss rates during other periods of automation throughout history, including automation through looms, steam engines and assembly lines. Fundamentally, workforce changes from AI will be like those during the industrial revolution and the introduction of the assembly line. About 12 percent of jobs will be lost. Around 75 percent of jobs will be augmented. And there will be new jobs created.


Gatwick Airport embraces IoT and Machine Learning


Experts are good at finding great ways to utilize limited resources, which is particularly important at Gatwick. When aided by IT, however, they can do even more. Machine learning can detect busy areas in the airport through smartphones, and tracking these results over the long term can provide key insights into optimizing day-to-day operations. When making decisions, Gatwick's management will be aided by powerful data that can provide insights not attainable with more traditional technologies, and the new IT infrastructure will be key to this analysis. Facial recognition technology will boost security as well as track late passengers, and personalized services based on smartphones or wearable technology can provide valuable updates to travellers on a personal level. Dealing with lost baggage can be a time-consuming and often stressful process. Armed with its new IT infrastructure, Gatwick and its airline operators are poised to offer a better alternative. Being able to track luggage and its owners creates new opportunities for simplifying the check-in and baggage claim process, helping get travellers in and out of the airport in a prompt and seamless manner.


What Is The Future Of Cryptocurrencies And Blockchain?

The cryptocurrency market is highly volatile. More than anything, these currencies currently serve investors as a speculative instrument. There are huge risks involved, and many nations have been restraining them in some way or another. Many scholars and industry leaders have stated that Bitcoin and many other cryptocurrencies are some sort of Ponzi scheme. So when asked what the future of cryptocurrencies looks like, Simon is quite optimistic about it. Rather than focusing on their day-to-day movements, if one looks at the big picture, it is quite evident that these currencies serve some particular function. ... Another important point of observation is that even though more and more nations are banning one or another cryptocurrency, especially in Southeast Asia, more money was invested in ICOs in the first four months of 2018 than in the whole of 2017. This clearly indicates that more and more institutional money has been moving into the crypto world.


Governance and culture will differentiate banks in 2018

It is probably no longer enough for board members to have just a simple understanding of regulatory updates like VAT or IFRS 9. They also need to better understand how emerging technologies such as fintech, regulatory technology (regtech) and blockchain can disrupt future banking operations and the financial sector in general. Boards of directors at banks still have ample opportunities to reshape their business models, and by leveraging technology, they can develop products that create further competitive advantage. To strengthen governance by the board even further, banks must also develop a skills matrix that includes knowledge and experience in financial reporting and internal controls, strategic planning, risk management, and corporate governance standards. The skills matrix, which should ideally be completed by the board, can include specialist requirements for capital markets, risk management, audit, finance, regulatory compliance and information technology (IT). Some banks in the UAE have started to appoint directors in accordance with the expertise requirements in the matrix.



This cryptocurrency phishing attack uses new trick to drain wallets

Researchers note that MyEtherWallet is an appealing target for attackers because it is simple to use, while its weaker security compared to banks and exchanges makes it a prominent target for attack. Once the user hits a MEWKit page, the phishing attack gets underway, with credentials including logins and the private key of the wallet being logged by the attackers. After that, the crooks look to drain accounts when the victim decrypts their wallet. The scam uses scripts which automatically create the fund transfer by pressing the buttons like a legitimate user would, all while the activity remains hidden -- it's the first time an attack has been seen to use this automated tactic. The back end of MEWKit allows the attackers to monitor how much Ethereum has been collected, as well as keeping a record of private user keys and passwords which can potentially be used for further attacks. Those behind MEWKit appear to have been active for some time and have carried out some sophisticated campaigns. Researchers say MEWKit demonstrates a "new dedicated effort from threat actors to pursue cryptocurrency".



Quote for the day:


"Do not listen to those who weep and complain, for their disease is contagious." -- Og Mandino


Daily Tech Digest - May 18, 2018

“Elicitation of requirements and using those requirements to get IT onboard and understand what the client really wants, that’s one of the biggest responsibilities for BAs. They have to work as a product owner, even though the business is the product owner,” Gregory says. “[They need to ask:] What do the systems need to do, how do they do it, who do we need to get input from, and how do we get everyone to agree on what we need to do before we go and do it? The BA’s life revolves around defining requirements and prioritizing requirements and getting feedback and approval on requirements,” says Jeffrey Hammond, vice president and principal analyst at Forrester Research. The role of a business analyst is constantly evolving and changing – especially as companies rely more on data to advise business operations. Every company has different issues that a business analyst can address, whether it’s dealing with outdated legacy systems, changing technologies, broken processes, poor client or customer satisfaction or siloed large organizations.



Why AI is the perfect software testing assistant

Software testers are highly analytical, creative problem solvers. To identify hidden defects and areas where users might get frustrated, they must ask what others haven't asked and see what others don't see. But the analytical process takes time, and it isn't always as efficient as today's businesses and the users of their software demand. Artificial intelligence (AI), and its ability to search data sets for golden nuggets, could really come in handy here. An AI tool could quickly locate tests that have already been written to cover a particular scenario or new line of code. The system could even tell testers which test cases are most appropriate for the requirement. Over time, an AI tool could even pinpoint what might be causing the bugs that those tests find, based on past data. When combined with testers' wealth of knowledge about the product and its users, AI has the potential to significantly increase testing efficiency. ... We are beginning to see a few AI-enhanced testing tools hit the market now; initial capabilities include highlighting areas of risk that need further testing or that weren't covered at all. There will be many more advanced tools released in the coming months and years.
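
A minimal sketch of the "locate tests that already cover a scenario" idea, ranking existing test descriptions by textual similarity to a new requirement with scikit-learn; the corpus is a toy example:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

tests = [
    "login rejects wrong password",
    "cart total updates when item removed",
    "checkout applies discount code",
]
requirement = ["discount code reduces checkout price"]

# Vectorize test descriptions and the new requirement, then rank by similarity.
vectorizer = TfidfVectorizer().fit(tests + requirement)
scores = cosine_similarity(vectorizer.transform(requirement),
                           vectorizer.transform(tests))[0]

best_score, best_test = max(zip(scores, tests))
print(f"most relevant existing test: {best_test} (score {best_score:.2f})")
```

A production tool would learn from richer signals (code coverage, past failures), but the ranking idea is the same.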


Blockchain technology lacks enough use cases to be disruptive, says Worldpay


A lack of strong use cases for blockchain is preventing the technology from disrupting the financial services industry, according to Worldpay. The payment company’s head of technology operations, Jason Scott-Taggart, said the organisation had not ruled out using blockchain in future, but the technology still has some way to go. “You’d be surprised, but in payments blockchain is not as disruptive as people assume it is. There’s not a lot of demand for cryptocurrencies, and blockchain as a technology is not something we have seen a good application for in what we do yet,” he told Computer Weekly in an interview at the ServiceNow Knowledge 18 conference. His view echoes research from Gartner, which found just 1% of CIOs are currently undertaking blockchain projects and 8% plan to start one in the short term. The analyst firm’s vice-president, David Furlonger, said the technology was “massively hyped” and warned “rushing into blockchain deployments could lead organisations to significant problems of failed innovation, wasted investment [and] rash decisions”.


Improve the rapid application development model for deployment readiness

An increasing number of enterprises adopt rapid application development tools rather than reworking their DevOps toolchain. Kubernetes, Marathon and other container orchestration platforms easily combine with continuous integration tools such as Jenkins to make every stage of rapid development, from unit testing through production, part of an explicit flow. The move from idea to prototype is defined in rapid development terms, using rapid development tools. Jenkins, Buildbot, CruiseControl and similar tools frame production as a stage of rapid or continuous development. At each stage, they link to container orchestration for deployment. Simply hosting application code in containers does not guarantee that the orchestration practices for each stage will be comparable, but it does organize the process overall. Containers, and a single orchestration tool, provide commonality across all stages of rapid application development to ensure that every stage is tested, including the transition to production. The rapid application development model, in both setups, is a string of testing and integration phases linked together.


Adware bundle makes Chrome invisible to launch cryptojacking attacks

Known as cryptojacking, this practice involves the use of often-legitimate mining scripts which are deployed on browsers without user consent, before funneling the proceeds to mining pools controlled by threat actors. According to the publication, the bundle creates a Windows autorun which launches the Google Chrome browser invisibly. By using specific code to launch the browser, the software forces Chrome into an invisible, headless state. The browser then connects to a mining page whenever the user logs into Windows. This page launches the CoinCube mining script, which steals processing power to mine Monero. CPU usage may spike to as much as 80 percent, and while victims may notice their PCs are slow, it could be a very long time before the software is uncovered and removed -- or users may simply blame Chrome for the oddity. The researcher opened the website page responsible for the script in a standard browser window and came across an interesting element of the script; the page masquerades as a Cloudflare anti-DDoS page.
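
On the defensive side, a minimal sketch (assuming the psutil package, and assuming the hidden browser is launched with a headless-style flag) that flags Chrome processes running with such options:

```python
import psutil

# Walk running processes and flag Chrome instances with headless flags.
for proc in psutil.process_iter(attrs=["name", "cmdline"]):
    name = (proc.info.get("name") or "").lower()
    cmdline = " ".join(proc.info.get("cmdline") or [])
    if "chrome" in name and "--headless" in cmdline:
        print(f"possible hidden browser, PID {proc.pid}: {cmdline}")
```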


Telegrab: Russian malware hijacks Telegram sessions

Cisco Talos researchers Vitor Ventura and Azim Khodjibaev dubbed the malware Telegrab. They analyzed two versions of it. The first one, discovered on April 4, 2018, only stole browser credentials, cookies, and all text files it could find on the system. The second one, spotted less than a week later, is also capable of collecting Telegram's desktop cache and key files, and login information for the Steam website. To steal Telegram cache and key files, the malware is not taking advantage of software flaws. It can target only the desktop version of the popular messenger because that version does not support Secret Chats and does not have the auto-logout feature active by default. This means that the attacker can use the stolen files to access the victim's Telegram session (if the session is open), contacts and previous chats. Telegrab is distributed via a variety of downloaders, and it checks whether the victim's IP address is part of a list that includes Chinese and Russian IP addresses, along with those of anonymity services in other countries. If it is, it will exit.


Blockchain will be the killer app for supply chain management in 2018

Private or "permissioned" blockchains can be created within a company's four walls or between trusted partners and centrally administered while retaining control over who has access to information on the network. Blockchain can also be used between business partners, such as a cloud vendor, a financial services provider and its clients. Bill Fearnley, Jr., research director for IDC's Worldwide Blockchain Strategies, recently returned from visiting company clients in China where he found "everybody wanted to talk about supply chain. "If you build a blockchain ledger within [a single company] that has a certain value," Fearnley said. "The real value for blockchain is when you use distributed electronic ledgers and data to connect with suppliers, customers and intermediaries." One major challenge with supply chain management today involves trade finance record keeping, because a lot of trade finance record keeping is still based on inefficient systems: including faxes, spreadsheets, emails, phone calls and paper.


Zara concept store greets shoppers with robots and holograms

At Zara’s new flagship store in London, shoppers can swipe garments along a floor-to-ceiling mirror to see a hologram-style image of what they’d look like as part of a full outfit. Robot arms get garments into shoppers’ hands at online-order collection points. iPad-wielding assistants also help customers in the store order their sizes online, so they can pick them up later. “Customers don’t differentiate between ordering online or in a store,” spokesman Jesus Echevarria Hernandez said. “You need to facilitate that as best as you can.” The store, which opened Thursday, shows how retailers are increasingly blending online and bricks-and-mortar shopping in a bid to keep up with the might of Amazon.com Inc. Inditex SA, the Spanish company that owns Zara, calls it an example of the technologies it will implement around the world. ... Amazon is moving the other way, building out its physical retail presence. Not only has it acquired grocer Whole Foods Market Inc., it has opened Amazon Go convenience stores, which use artificial intelligence and video cameras in lieu of checkouts, in several U.S. cities.


Icinga: Enterprise-Grade Open-Source Network Monitoring That Scales

Icinga runs on most of the popular Linux distros and the vendor provides detailed installation instructions for Ubuntu, Debian, Red Hat (including CentOS and Fedora) and SUSE/SLES. Icinga does not publish specific hardware requirements, but our installation ran well on a quad-core processor with 4 GB RAM, and this is probably a good starting point for a basic installation. ... As with most monitoring applications, storage is an important variable that largely depends on the number of hosts and services monitored and how often information is written to the log. With too little storage, the logs can easily fill up and freeze the system. We were able to quickly install Icinga on Ubuntu 16.04 LTS with just a few simple commands at the prompt. The first step was to add the necessary files to the local package repository, and then install the actual Icinga application. Icinga can be used to monitor the availability of hosts and services from switches and routers as well as a variety of network services such as HTTP, SMTP and SSH.


CISO soft skills in demand as position evolves into leadership role

You need to be able to understand what engineering is trying to do and what their goals are, what marketing and procurement are doing, what the customer is trying to do and what their goals are. If you can't empathize with their goals and challenges, you can't influence. So much flows from that: your communication skills and communication style will flow from empathy. You also need to understand what we call the data subject -- the consumer who doesn't understand what's happening to their data -- and to have empathy for them, as well as for all the other stakeholders. It's empathizing with everybody and making the wisest decision to push for the best outcome you can. ... It's important for at least two different reasons. One, from a practical perspective, I've talked a lot about the skills gap. If we're blocking 50% of the planet from joining this career path, we're really contributing to our biggest challenge. Then the other part: women across the globe are economically oppressed, and information security is a lucrative field. I want to get women into the information security field so they can be financially independent and make a good living.



Quote for the day:


"Leadership - leadership is about taking responsibility, not making excuses." -- Mitt Romney


Daily Tech Digest - May 17, 2018

7 Basic Rules for Button Design


Every item in a design requires effort by the user to decode. Generally, the more time users need to decode the UI, the less usable it becomes for them. But how do users understand whether a certain element is interactive or not? They use previous experience and visual signifiers to clarify the meaning of the UI object. That's why it is so important to use appropriate visual signifiers (such as size, shape, color, shadow, etc.) to make the element look like a button. Visual signifiers hold essential information value -- they help to create affordances in the interface. Unfortunately, in many interfaces the signifiers of interactivity are weak and require interaction effort; as a result, they effectively reduce discoverability. If clear affordances of interaction are missing and users struggle to tell what is "clickable" and what is not, it won't matter how cool we make the design: if people find it hard to use, they will find it frustrating and, ultimately, not very usable. Weak signifiers are an even more significant problem for mobile users. To work out whether an individual element is interactive, desktop users can move the cursor over the element and check whether the cursor changes its state. Mobile users don't have that opportunity.



Serverless deployment lifts enterprise DevOps velocity


At a tipping point of serverless expertise, enterprises will start to put existing applications in serverless architectures as well. Significant challenges remain when converting existing apps to serverless, but some mainstream companies have already started that journey. Smart Parking Ltd., a car parking optimization software maker based in Australia, moved from its own data centers in Australia, New Zealand and the U.K. to AWS cloud infrastructure 18 months ago. Its next step is to move to an updated cloud infrastructure based on Google Cloud Platform, which includes Google Cloud Functions serverless technology, by June 2018. "As a small company, if we just stayed with classical servers hosted in the cloud, we were doing the same things the same way, hoping for a different outcome, and that's not realistic," said John Heard, CTO at Smart Parking. "What Google is solving are the big questions around how you change your focus from writing lots of code to writing small pieces of code that focus on the value of a piece of information, and that's what Cloud Functions are all about," he added.
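
To make the "small pieces of code" idea concrete, here is a minimal HTTP function sketch using the com.google.cloud.functions Functions Framework API. This is purely illustrative, not a depiction of Smart Parking's code: Cloud Functions launched with Node.js support, with Java support arriving later, and the class and parameter names below are invented.

```java
import com.google.cloud.functions.HttpFunction;
import com.google.cloud.functions.HttpRequest;
import com.google.cloud.functions.HttpResponse;

// One function, one unit of value, no server to manage: the platform
// invokes service() per request and handles scaling and infrastructure.
public class ParkingEvent implements HttpFunction {
    @Override
    public void service(HttpRequest request, HttpResponse response) throws Exception {
        // A real function would validate and persist the event payload;
        // this sketch just acknowledges it. "bay" is a hypothetical parameter.
        String bay = request.getFirstQueryParameter("bay").orElse("unknown");
        response.getWriter().write("parking event recorded for bay " + bay);
    }
}
```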


3 reasons why hiring older tech pros is a smart decision

For software engineers in particular, experience counts a lot, Matloff said. "The more experienced engineers are far better able to look down the road, and see the consequences of a candidate code design," he added. "Thus they produce code that is faster, less bug-prone, and more extendible." And in data science, recent graduates may know a number of techniques, but often lack the ability to use them effectively in the real world, Matloff said. "Practical intuition is crucial for effective predictive modeling," he added. Older tech workers also typically have more experience in terms of management and business strategy, Mitzner said. Not only can they offer those skills to the company, they can also act as mentors to younger professionals and pass on their knowledge, she added. "Most people who have been successful in their career would say that they had a great mentor," Mitzner said. "If you have a business that's all 20s to 30s, you could be really missing out on that." Many older employees also appreciate the same flexibility that younger workers do, as they balance work and home life with aging parents and children reaching adulthood, said Sarah Gibson, a consultant with expertise on changing generations in the workforce.


Computes DOS: Decentralized Operating System

“Computes is more like a decentralized operating system than a mesh computer,” replied one of our most active developer partners yesterday. He went on to explain that Computes has all of the components of a traditional computer, but designed for decentralized computing. The more I think about it, the more profound his analogy is. We typically describe Computes as a decentralized peer-to-peer mesh supercomputing platform optimized for running AI algorithms near real-time data and IoT data streams. Machines running the Computes nanocore agent can connect, communicate, and compute together as if they were physical cores within a single software-defined supercomputer. In light of yesterday’s discussion, I believe that we may be selling ourselves short on Computes’ overall capabilities. While Computes is uniquely well positioned for enterprise edge and high-performance computing, most of our beta developers seem to be building next-generation decentralized apps (Dapps) on top of our platform.


GDPR impact on Whois data raising concern


Cyber criminals typically register a few hundred, even thousands, of domains for their activities, and even if fake details are used, registrants have to supply a real phone number and email address, which is enough for the security community to link associated domains. Using high-speed machine-to-machine technology and with full access to Whois data, Barlow said, organisations such as IBM were able to block millions of spam messages or delay activity coming from domains associated with individuals linked to spam. While the GDPR is designed to enhance the privacy of individuals, it is having the unintended effect of encouraging domain registrars not to submit registration details to the RDS (the Registration Directory Service), which means the information is incomplete and of less value to cyber crime fighters. Without access to Whois data, IBM X-Force analysts predict it might take more than 30 days to detect malicious domains by other methods, leaving organisations at the mercy of cyber criminals during that period.


Brush up on microservices architecture best practices


Isolating and debugging performance problems is inherently harder in microservice-based applications because of their more complex architecture. Therefore, productively managing microservices' performance calls for a full-fledged troubleshooting plan. In this follow-up article, Kurt Marko elaborated on what goes into successful performance analysis. Effective examples of the practice incorporate data pertaining to metrics, logs and external events. To get the most out of tools like Loggly, Splunk or Sumo Logic, aggregate all of this information into one unified data pool. You might also consider a tool that uses the open source ELK Stack: Elasticsearch can greatly assist troubleshooters in identifying and correlating events, especially in cases where log files don't display the pertinent details chronologically. The techniques and automation tools used for conventional monolithic applications aren't necessarily well suited to isolating and solving microservices' performance problems.
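
One practice that makes such aggregation pay off, not mentioned in the article but widely used alongside these tools, is tagging every log line with a correlation ID so an aggregator can stitch a single request's path back together across services even when individual files aren't chronological. A minimal sketch, with invented service and event names:

```java
import java.util.UUID;
import java.util.logging.Logger;

// Sketch of correlated logging: every line carries the same request ID,
// so a log aggregator can reassemble one request's journey across services.
public class CorrelatedLogging {
    private static final Logger LOG = Logger.getLogger("orders");

    public static void main(String[] args) {
        String correlationId = UUID.randomUUID().toString();
        LOG.info(() -> String.format("correlationId=%s service=orders event=received", correlationId));
        LOG.info(() -> String.format("correlationId=%s service=payments event=charged", correlationId));
        // Downstream services would propagate correlationId in request headers.
    }
}
```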


More Attention Needs to be on Cyber Crime, Not Cyber Espionage

Cyber crime remains a global problem that continues to be innovative and all-encompassing. What's more, cyber crime doesn't focus solely on organizations but also on individuals. The statistics demonstrate the magnitude of the cyber crime onslaught. According to a 2017 report by one company, damages incurred by cyber crime are expected to reach USD $6 trillion by 2021. Conversely, cyber security investment is only expected to reach USD $1 trillion by 2021, according to the same source. Furthermore, data breaches continue to afflict individuals. During the first half of 2017, more than 2 billion records fell victim to cyber theft, whereas "only" 721 million records were lost during the last half of 2016, a 164 percent increase. According to another reputable source, the three major classifications of breaches impacting people were identity theft (69 percent), access to financial data (15 percent), and access to accounts (7 percent). With cyber crime communities existing all over the world, these groups and individuals offer professional business goods and services based on quality and reputation, which serve to quickly weed out inferior performers; innovation and dependability are instrumental to success.


Data integrity and confidentiality are 'pillars' of cybersecurity


There are two pillars of information security: data integrity and confidentiality. Let's take a simple example: your checking account. Integrity concerns the number. When you go to an ATM, online or to a teller and check your balance, that number should be easily agreed upon by you and your bank. There should be a clear ledger showing who put money in, when and how much, and who took money out, when and how much. There shouldn't be any randomness; there shouldn't be people putting money in or taking money out without your knowledge or your permission. So, one pillar is ensuring the integrity of information: the code you're running and the executables of your applications should be the same ones the developer wrote. Just like the numbers in your bank account, the code you're running should not be tampered with. Then, there's confidentiality. You and your bank should be the only ones who know the numbers in your bank account. The problems that arise when you take confidentiality away from your checking account are the same ones that arise when it is missing from your applications and infrastructure.
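
The integrity pillar translates directly into practice as digest verification: before running code, compare its hash against the one the developer published. A minimal sketch, with a placeholder file name and expected digest:

```java
import java.math.BigInteger;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

// Verify that an executable matches the digest its developer published.
// "app.jar" and the expected value are placeholders for illustration.
public class IntegrityCheck {
    public static void main(String[] args) throws Exception {
        byte[] bytes = Files.readAllBytes(Path.of("app.jar"));
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(bytes);
        String actual = new BigInteger(1, digest).toString(16);
        String expected = "d2c1...";  // placeholder: the developer-published hash
        System.out.println(actual.equals(expected)
                ? "integrity verified"
                : "file has been tampered with");
    }
}
```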


This new type of DDoS attack takes advantage of an old vulnerability

"Just like the much-discussed case of easily exploitable IoT devices, most UPnP device vendors prefer focusing on compliance with the protocol and easy delivery, rather than security," Avishay Zawoznik, security research team leader at Imperva, told ZDNet. "Many vendors reuse open UPnP server implementations for their devices, not bothering to modify them for a better security performance." Examples of problems with the protocol go all the way back to 2001, but the simplicity of using it means it is still widely deployed. However, Imperva researchers claim the discovery of how it can be used to make DDoS attacks more difficult to attack could mean widespread problems. "We have discovered a new DDoS attack technique, which uses known vulnerabilities, and has the potential to put any company with an online presence at risk of attack," said Zawoznik. Researchers first noticed something was new during a Simple Service Discovery Protocol (SSDP) attack in April. 


Java 9: The Module System and Reactive Streams

This article focuses on the Module System and Reactive Streams; you can find an in-depth description of JShell here, and of the Stack Walking API here. Naturally, Java 9 also introduced some other APIs, as well as improvements related to internal implementations of the JDK; you can follow this link for the entire list of Java 9 characteristics. The Java Platform Module System (JPMS) – the result of Project Jigsaw – is the defining feature of Java 9. Simply put, it organizes packages and types in a way that is much easier to manage and maintain. In this section, we'll first go over the driving forces behind JPMS, then walk through the declaration of a module. Finally, we'll build a simple application illustrating the module system. ... Module descriptors are the key to the module system. A descriptor is the compiled version of a module declaration – specified in a file named module-info.java at the root of the module's directory hierarchy. A module declaration starts with the module keyword, followed by the name of the module. The declaration ends with a pair of curly braces wrapping around zero or more module directives.
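
As a concrete illustration of that description, here is a minimal module declaration; the module and package names are invented.

```java
// The contents of a file named module-info.java at the root of the
// module's source tree: the module keyword, the module name, and a
// pair of curly braces wrapping zero or more directives.
module com.example.greetings {
    requires java.logging;             // this module depends on java.logging
    exports com.example.greetings.api; // only this package is visible to consumers
}
```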



Quote for the day:


"Added pressure and responsibility should not change one's leadership style, it should merely expose that which already exists." -- Mark W. Boyer