Daily Tech Digest - February 20, 2018

Regression Testing Strategies: an Overview


Change is the key concept of regression testing. The reasons for these changes usually fall into four broad categories:

- New functionality. This is the most common reason to run regression testing. The old and new code must be fully compatible, but when developers introduce new code they rarely concentrate fully on its compatibility with the existing code. It is up to regression testing to find possible issues.
- Functionality revision. In some cases, developers revise the existing functionality and discard or edit some features. Here, regression testing checks whether the feature in question was removed or edited with no damage to the rest of the functionality.
- Integration. In this case, regression testing assures that the software product performs flawlessly after integration with another product.
- Bug fixes. Surprisingly, developers’ efforts to patch the found bugs may generate even more bugs. Bug fixing requires changing the source code, which in turn calls for re-testing and regression testing.
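In code terms, a regression suite simply pins existing behavior down so that none of these kinds of change can silently break it. A minimal sketch in Python, using a hypothetical `apply_discount` function and plain asserts:

```python
# Hypothetical example: a tiny regression suite pins down behavior that
# already shipped, so later changes (new features, revisions, bug fixes)
# cannot silently break it.

def apply_discount(price, rate):
    """Existing behavior: flat percentage discount, rounded to cents."""
    return round(price * (1 - rate), 2)

def test_existing_discount_unchanged():
    # These results were correct before the change and must stay correct.
    assert apply_discount(100.0, 0.10) == 90.0
    assert apply_discount(19.99, 0.00) == 19.99

test_existing_discount_unchanged()
print("regression checks passed")
```

Any later edit to `apply_discount` – a new feature, a refactor, a bug fix – reruns the same checks, which is the whole point of regression testing.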



How the travel industry is using Big Data to tailor-make your holidays


It doesn’t take much paranoia to see how this is obviously beneficial to the airlines: your type of credit card gives a rough idea of your credit score, your billing address can give an idea of your social status, and even your email address says something about you. Plus, it’s easy to spot if you regularly fly alone. Or is your family with you? Is a certain financially-unconnected person always in the seat next to you? Are you flying to a ‘romantic’ location? Did you book a nice hotel, or are you a cheapskate? Are any of your Facebook friends or Twitter followers on the flight? What have you been looking at on the in-flight WiFi? And what events are happening in the area where you bought your flight to? All this data allows airlines to develop better models of their customers, and therefore gives them ever better ways of refining their pricing models. Certain airlines are already running reverse auctions on upgrades, but this could be taken further.



5 Ways Blockchain Is Changing The Face Of Innovation In 2018


The volatility in cryptocurrencies is well-known and not for the faint-hearted, especially over recent weeks. Blockchain-based payment network Havven sets out to provide the first decentralized solution to price stability. Designed to provide a practical cryptocurrency, Havven uses a dual token system to reduce price volatility. The fees from transactions within the system are used to collateralise the network, secured by blockchain and supposedly enabling the creation of an asset-backed stablecoin. Think of Tether, but not being tied to the dollar. Each transaction generates fees that are paid to holders of the collateral token and as transaction volume grows, the value of the platform increases. Havven is a low-fee and stable payment network that wants to enable anyone anywhere to transact with anyone else. It's an interesting addition to the increasingly crowded crypto space.


Could we soon be seeing utility cryptocurrency mining?

Proof-of-work is the main model for cryptocurrency mining and blockchain, especially for Bitcoin. Basically, the way to guarantee the order of transactions is to slow down the system and make it computationally onerous to add a new block – i.e. it takes time and computing capacity. If two blocks are added simultaneously, then it is basically a competition to see who can perform the calculation tasks faster and add more to the chain, because the longer fork wins. The reward for adding a block is to receive some tokens (e.g. Bitcoins). SHA-256 (Secure Hash Algorithm), which came with Bitcoin, is a commonly used model, and there are targets for the hash algorithm value that basically force it to perform a lot of calculations for each transaction to achieve the targeted value. The benefit of the current algorithm is that the results are easy to check and see whose block is added to the chain. It would probably take quite a lot of work to develop models in which miners perform some otherwise useful computation as proof of work.
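The hard-to-produce, easy-to-verify property described above can be sketched in a few lines of Python. This is a toy illustration only – the 8-byte nonce and the `difficulty_bits` parameter are assumptions for the sketch, not Bitcoin’s actual block format:

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Toy proof-of-work: find a nonce so that SHA-256(block_data + nonce),
    read as an integer, falls below a difficulty target."""
    target = 1 << (256 - difficulty_bits)  # smaller target = more work
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Finding the nonce takes on the order of 2**difficulty_bits hash attempts...
nonce = mine(b"block payload", difficulty_bits=16)
# ...but verifying it takes exactly one hash, which is why it is easy to
# check whose block gets added to the chain.
check = hashlib.sha256(b"block payload" + nonce.to_bytes(8, "big")).digest()
assert int.from_bytes(check, "big") < 1 << (256 - 16)
```

Raising `difficulty_bits` roughly doubles the expected mining work per bit, while the verification cost stays constant at a single hash.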


For artificial intelligence to thrive, it must explain itself


The reason for this fear is that deep-learning programs do their learning by rearranging their digital innards in response to patterns they spot in the data they are digesting. Specifically, they emulate the way neuroscientists think that real brains learn things, by changing within themselves the strengths of the connections between bits of computer code that are designed to behave like neurons. This means that even the designer of a neural network cannot know, once that network has been trained, exactly how it is doing what it does. Permitting such agents to run critical infrastructure or to make medical decisions therefore means trusting people’s lives to pieces of equipment whose operation no one truly understands. If, however, AI agents could somehow explain why they did what they did, trust would increase and those agents would become more useful. And if things were to go wrong, an agent’s own explanation of its actions would make the subsequent inquiry far easier. Even as they acted up, both HAL and Eddie were able to explain their actions. 


Build a multi-cloud app with these four factors in mind

A key driver behind multi-cloud adoption is increased reliability. In 2017, Amazon's Simple Storage Service went down due to a typo in a command executed during routine maintenance. In the pre-cloud era, the consequences of an error like that would be relatively negligible. But, due to the growing dependence on public cloud infrastructure, that one typo reportedly cost upwards of $150 million in losses across many companies. A multi-cloud app -- or an app designed to run on various cloud-based infrastructures -- helps mitigate these risks; if one platform goes down, another steps in to take its place. ... Infrastructure changes should take days, not months. Regardless of the reason -- to save money, to prevent vendor lock-in or simply to run your app in a development environment without design compromises -- writing code without a specific cloud platform in mind ensures it will run on any server.




The Impact Of Artificial Intelligence Over The Next Half Decade

You will get a fully automated health checkup every time you take a bath or use the toilet at your house. Body fluids and temperature will be analyzed by sensors and the data will be forwarded to an “AI doctor” that will be able to inform you if there is something wrong with you and how to proceed. Ok, maybe this one will take a little longer than a decade. “ASIMO”-like droids will begin to be sold as “physical personal assistants” – not so different from the “common” robots in the movie AI – mainly to perform nursing support for the aging population. Cognitive Augmentation – As Maurice Conti explained, we are already “augmented”. Each and every one of us has a smartphone which is connected to the Internet and can easily reach out to a simple service like Google to get immediate knowledge about some unknown fact of life upon needing it.

What an artificial intelligence researcher fears about AI


Along the way, we will find and eliminate errors and problems through the process of evolution. With each generation, the machines get better at handling the errors that occurred in previous generations. That increases the chances that we’ll find unintended consequences in simulation, which can be eliminated before they ever enter the real world. Another possibility that’s farther down the line is using evolution to influence the ethics of artificial intelligence systems. It’s likely that human ethics and morals, such as trustworthiness and altruism, are a result of our evolution — and factor in its continuation. We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots. While neuroevolution might reduce the likelihood of unintended consequences, it doesn’t prevent misuse.


Red Hat CIO talks open hybrid cloud, next-generation IT

There's no one single roadblock that exists for the journey, which is ongoing. But the biggest hurdle is one of people, to have your people ready with the skills needed for this. We looked at this and asked: What are the types of skills we need resident in our team to live in this world? Do we want to hire people or leverage contractors? Then we built some programs around efforts to upskill our people; it's incumbent on us to help them learn new skills. But we had a mix of all three [new hires, contractors and upskilled staff]. I don't think it's pragmatic to think you can do one versus the other. I think you need to think all three of those. [On the other hand] just giving it to a provider saying, 'Go figure this out,' is a recipe for disaster. You have to stay very engaged.


Growing an Innovative Culture


Creating an innovative culture requires strong leaders who realise that changes in the culture have to start with themselves. We speak to many executives who think they can change the culture by creating a special team to foster innovation. This is not a "make it so" change. It requires everyone (including the executive) to behave differently in order to change the culture. Most executives and upper management are not motivated to change their behavior, as their rewards system is usually based on short-term financial measures and not value delivery to customers and other stakeholders. Organisational risk aversion is another big barrier to innovation. We are frequently asked to provide stories to executives on how their competitors or other organisations much like themselves have implemented innovation. No one wants to be the first to try something new or different for fear of failure.



Quote for the day:


"Leaders who won't own failures become failures." -- Orrin Woodward


Daily Tech Digest - February 19, 2018


The problem is that employee satisfaction can be a double-edged sword. While satisfied employees are good for current activities, that very satisfaction can inhibit innovation. Transformative innovation is difficult. It is far easier to stick with what we know works and tweak the current process than it is to start over. People who are satisfied with the current way of doing business are not likely to transform it. People who transform their organizations must be aggravated enough with the current situation that they’re willing to bear the effort and risk to change it. Leaders who want their organizations to continuously transform must not only look for dissatisfaction on which to capitalize, but also be willing to cultivate dissatisfaction in their employees. ... The right kind of dissatisfaction is a mindset of constantly questioning the status quo and striving for more-than-incremental change. The wrong kind is constantly finding fault with the current situation, arguing that it is somebody else’s fault and assuming it’s somebody else’s responsibility to fix.



Dear IT security pros: it's time to stop making preventable mistakes

Just think about it – how many log analysis services do you know? They generally all have a nice UI. Same goes for SIEMs. But the confusion comes with the graphic and alert overload – red and green icons telling analysts there are numerous findings that require attention. Security analysts usually don’t know which alerts to start executing on – and it’s hard to determine which alert is of the highest risk and which is just noise because no personnel changed its threshold. And to make matters worse, once a security analyst has opened an alert to start vetting it, they’re usually too scared to close down wide-open-to-the-internet ports because they don’t know the extent of the impact that will have on the company’s production environment. As a security advisor, the thing that really irritates me is just how preventable most (if not all) of the 2017 attacks I researched were. Companies like Equifax are not being decimated by unusually savvy hackers, they are being exposed by their own internal mistakes. Most of these errors are straight out of any “Tech Security 101” textbook.



Global cyber risk perception: Highest management priorities

The survey also found that a vast majority – 75% – identified business interruption as the cyber loss scenario with the greatest potential to impact their organization. This compares to 55% who cited breach of customer information, which has historically been the focus for organizations. Despite this growing awareness and rising concern, only 19% of respondents said they are highly confident in their organization’s ability to mitigate and respond to a cyber event. Moreover, only 30% said they have developed a plan to respond to cyber-attacks. “Cyber risk is an escalating management priority as the use of technology in business increases and the threat environment gets more complex,” said John Drzik, president Global Risk and Digital, Marsh. “It’s time for organizations to adopt a more comprehensive approach to cyber resilience, which engages the full executive team and spans risk prevention, response, mitigation and transfer.”


Meaningful AI Deployments Are Starting To Take Place: Gartner

Meaningful Artificial Intelligence (AI) deployments are just beginning to take place, according to Gartner. Gartner’s 2018 CIO Agenda Survey shows that 4% of CIOs have implemented AI, while a further 46% have developed plans to do so. "Despite huge levels of interest in AI technologies, current implementations remain at quite low levels," said Whit Andrews, research vice president and distinguished analyst at Gartner. "However, there is potential for strong growth as CIOs begin piloting AI programs through a combination of buy, build and outsource efforts." As with most emerging or unfamiliar technologies, early adopters are facing many obstacles to the progress of AI in their organizations. Gartner analysts have identified the following four lessons that have emerged from these early AI projects.


Hacking critical infrastructure via a vending machine? The IOT reality

Many are currently, and rightly, concerned about protection from outside threats getting into important networks. The latest firewalls, intrusion prevention systems and advanced protection systems all play a part in defence, but as more and more connected devices enter networks, it is now critical to look at threats from within as well. If firms do not have proper infrastructure to support IoT devices, they risk exposing their corporate networks to malicious activities. This can lead to devastating effects, especially if hackers uncover vulnerabilities in IoT devices within critical infrastructure. A good starting point for businesses as they take their network security efforts seriously in today's hyper-connected world is to increase awareness of all the devices on the network and implement centralised management systems that help ensure compliance.


Ok, I Was Wrong, MDM is Broken Too: Insular, Dictatorial MDM Doesn’t Work

In master data management, fundamentally, your data problems are not technology problems. They are not even MDM problems. Your data problems aren’t even really well … data problems. They are business problems. They are the problem of getting four business people, three data stewards and several application managers into a room to formally agree on what revenue is for a customer record stored in the sales, marketing, ERP, and finance systems. MDM problems are about getting the right people educated, motivated and in agreement. And this can be messy and difficult. When you succeed with MDM, you succeed by working from the business down. When you fail, you fail because you design and implement something around a technology first and then you ‘release’ your master data solution to various practitioners around your company and expect them to comply. Like my peers in my freshman programming course, we race to implement without spending enough time planning, negotiating and understanding.


Dissect the SQL Server on Linux high availability features


The availability group configurations that provide high availability and data protection require three synchronous replicas. When there is no Windows Server failover cluster, the availability group configuration is stored in the master database on participating SQL Server instances, which need at least three synchronous replicas to provide high availability and data protection. An availability group with two synchronous replicas can provide data protection, but this configuration cannot provide automatic high availability. If the primary replica has an outage, the availability group will automatically fail over. However, applications cannot automatically connect to the availability group until the primary replica is recovered. You can have a mixed availability group that contains both Windows and Linux replicas, but Microsoft only recommends this for data migration.


“Less is More”: Four Steps to Aligning Your Project Queue and Goals Today

Today, as grown-ups, “busywork” no longer holds the cachet it once may have. With corporate belts tightening and analytics available that expose the efficacy of each and every tactic, bloat can be harmful or fatal to even the most well-intentioned of marketing professionals. And with 40 percent of corporate enterprises still bemoaning the fact that they can’t prove the ROI of their marketing activities, it’s clear that in many marketing departments, the project queue may be filled with plenty to keep the team busy – but is it hitting the mark? I recently spent time with a financial services client that was struggling to define growth, as it battled for market share in a crowded segment. Upon evaluating its marketing portfolio, it was clear that it had completed many projects in the recent past – but only a handful had yielded what one would consider to be “big wins.”


How to connect to a remote MySQL database with DBeaver

If your database of choice is MySQL, you have a number of options. You can always secure shell into that server and manage the databases from the command line. You can also install a tool like phpMyAdmin or adminer to take care of everything via a web-based interface. But what if you'd prefer to use a desktop client? Where do you turn? One possible option is DBeaver. DBeaver is a free, universal SQL client that can connect to numerous types of databases—one of which is MySQL. I want to show you how to install and use DBeaver to connect to your remote MySQL server. DBeaver is available for Windows, macOS, and Linux. I'll be demonstrating on an Ubuntu 17.10 desktop connecting to an Ubuntu Server 16.04 machine. The installation of DBeaver is fairly straightforward, with one hitch. Download the necessary .deb file from the downloads page and save it to your ~/Downloads directory. Open up a terminal window and change into that directory with the command cd ~/Downloads.


5 things that will slow your Wi-Fi network

The 2.4 GHz frequency band has 11 channels (in North America), but only provides up to three non-overlapping channels when using the default 20 MHz wide channels or just a single channel if using 40 MHz-wide channels. Since neighboring APs should be on different non-overlapping channels, the 2.4 GHz frequency band can become too small very quickly. The 5 GHz band, however, provides up to 24 channels. Not all APs support all the channels, but all the channels are non-overlapping if using 20 MHz-wide channels. Even when using 40 MHz-wide channels, you could have up to 12 non-overlapping channels. Thus, in this band, you have less chance of co-channel interference among your APs and any other neighboring networks. You should try to get as many Wi-Fi clients as you can to use the 5 GHz band on your network to increase speeds and performance. Consider upgrading any 2.4 GHz-only Wi-Fi clients to dual-band clients.
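The channel arithmetic above is easy to check directly. The counts in this sketch assume the North American regulatory domain and APs that support all of the 5 GHz channels, per the caveats in the passage:

```python
# Non-overlapping channel counts from the passage, keyed by
# (band, channel width in MHz).
non_overlapping = {
    ("2.4 GHz", 20): 3,      # channels 1, 6 and 11
    ("2.4 GHz", 40): 1,      # the narrow band fits only one 40 MHz channel
    ("5 GHz", 20): 24,       # all 5 GHz channels are non-overlapping
    ("5 GHz", 40): 24 // 2,  # bonding pairs of 20 MHz channels
}
assert non_overlapping[("5 GHz", 40)] == 12  # matches the figure in the text
```

The eight-fold difference at 20 MHz (24 channels versus 3) is the whole argument for steering clients to 5 GHz.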



Quote for the day:



"Learn to do favors not for the people that can later return the favor but for those that need the favor." -- Unknown


Daily Tech Digest - February 18, 2018

Ready or Not, It's Time to Embrace AI

AI has changed online commerce by enabling brands to make sense of their data and put it to good use with smarter algorithms. In this age of conversational commerce, artificial intelligence is critical to providing a personalized experience. Businesses without an AI strategy are almost certain to perish. According to Forrester, insights-driven businesses will "steal" $1.2 trillion per year from their "less-informed peers." Until a few months back, only bigger companies could afford the sizable investments required to implement AI. That's no longer the case. AI is becoming more accessible to businesses of all sizes. In the next few years, AI will continue to expand its reach throughout organizations. Early adopters already are reaping the benefits. If you're not one of them, now is the time to start. Here are four reasons you (and every small-business owner) should incorporate AI-enabled technology in your sales and customer-service strategies.


Artificial Intelligence And The Threat To Salespeople


If you work in sales, now is the time to step your game up in a major way. Companies are investing in technology to replace salespeople. The truth is, your company thinks you're overpaid. If you're a salesperson, you're probably making six, seven or eight figures a year, and your company believes it's too much money. Now, listen, I'm not here to give you good news. I'm here to give you the truth. Here's what I see in the wave of the future. Those who know how to program the technology, operate the robots and work with artificial intelligence — the computer programs, algorithms, etc. — will be the salespeople remaining in their jobs. No longer will you be able to say, "People expect service. They want me to answer when the phone rings." Admin jobs will be automated. ... A human. We can expect a slew of robots to replace a lot of mid-level income earners. Many salespeople making six and seven figures a year will be removed, no matter their skills and sales. Artificial intelligence is far stronger than our natural-born intelligence.


Why is it so hard to train data scientists?

A data scientist should be familiar with databases, as many of the world’s data are organized in relational and non-relational databases. For working with a variety of data types, the data scientist needs to be able to parse and render files, and convert between data formats. Working with large databases often requires programming skills beyond basic scripting in R or Python, as well as knowledge of algorithm design and operating systems. Machine learning is also a required skill. In other words, a complete data scientist should have knowledge of computer science at the level of a trained computer scientist. A data scientist must also be highly familiar with statistics, and understand multiple statistical methods for tasks such as regression, dimensionality reduction, statistical significance analysis, Monte Carlo simulations, and Bayesian methods, to name a few.
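As one small, illustrative instance of the statistical toolbox listed above, here is a Monte Carlo simulation estimating pi; the function name, seed and sample count are arbitrary choices for the sketch:

```python
import random

def monte_carlo_pi(n_samples: int, seed: int = 0) -> float:
    """Estimate pi by sampling points in the unit square and counting
    the fraction that land inside the quarter circle."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4 * inside / n_samples

estimate = monte_carlo_pi(100_000)
assert abs(estimate - 3.14159) < 0.05  # the estimate tightens as n grows
```

The same sample-and-count pattern scales to problems with no closed-form answer, which is why Monte Carlo methods appear on nearly every data science skills list.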


Trend Micro Cybersecurity Reference Architecture for Operational Technology

Vulnerabilities arise particularly when just-in-time manufacturing and a faster speed to market leave less time for product safety testing. These vulnerabilities might not be uncovered until millions of vehicles have been released, in which case the necessary patching procedure is all but certain to prove even more costly — not only to the affected carmaker’s finances but also to its reputation. It’s important, then, for security measures to be properly applied right from the outset of the car manufacturing process, starting in the design phase. That is why it is important for device manufacturers to integrate security into the device itself, to ensure consumers and businesses are protected from these challenges the minute they install your IoT device. Because of these challenges, Trend Micro has developed a cybersecurity solution called Trend Micro Internet of Things (IoT).


Is REST the New SOAP?

Almost a decade ago there was a flurry of activity around REST- and SOAP-based systems. Several authors wrote about the pros and cons of one or the other, or when you should consider using one instead of the other. However, with much attention moving away from SOAP-based Web Services to REST and HTTP, the arguments and discussions died down and many SOA practitioners adopted REST (or plain HTTP) as the basis for their distributed systems. However, recently Pakal De Bonchamp wrote an article called "REST is the new SOAP" in which he compares using REST to "a testimony to insanity". His article is long and detailed, but some of the key points he makes include the complexity, in his view, of simply exposing a straightforward API, which could be done via an RPC mechanism in "a few hours" yet with REST can take much longer. Why? Because: No more standards, no more precise specifications. Just a vague “RESTful philosophy”, prone to endless metaphysical debates, and as many ugly workarounds.


Where do blockchain opportunities lie? Top FinTech influencers weigh in

Any evolution in infrastructure must support the service and product expectations of the marketplace. Perhaps the most notable change in the financial services space would be the transition from corporate to individual data ownership and privacy. This in itself will fundamentally change the relationship between a bank and its customers, as well as industry revenue models. Beyond this, you’ll see intermediaries from our traditional financial services model get squeezed out as blockchain technologies reduce overall risk to any transaction. Additionally, blockchain technology’s standardization of information will enable broader adoption of adjacent new age technologies such as RPA and AI. These technologies leveraged together will move traditional financial services off of spreadsheets. This will require more training and retraining of personnel.


Strava’s privacy PR nightmare shows why you can’t trust social fitness apps

Strava needs its users to share their rides, runs, and swims. After all, the more activities they share—currently users post over 1.3 million activities per day—the more evidence Strava has to encourage others to keep using the app, and perhaps even trade up from the free version to an $8-per-month one. More shared data also means more to feed into Strava’s Metro business, which sells anonymized commuter data to cities. The company wasn’t profitable as of this past fall, but its CEO, James Quarles, clearly sees these two lines of business as the main paths to growth, assuming it gets more and more information from its users. And, frankly, using Strava in a very social way can be addicting. Since it began, in 2009, the company has perfected the art of fitness gamification and competitive sharing. Its app lets you see basic stats from your and your friends’ workouts; it encourages you to give each other kudos for completing activities


“Unlearn” to Unleash Your Data Lake

Sometimes it’s necessary to unlearn long-held beliefs (i.e. 2-point shooting in a predominately isolation offense game) in order to learn new, more powerful, game-changing beliefs (i.e. 3-point shooting in a rapid ball movement offense). Sticking with our NBA example, Phil Jackson is considered one of the greatest NBA coaches, with 11 NBA World Championships coaching the Chicago Bulls and the Los Angeles Lakers. Phil Jackson mastered the “Triangle Offense” that played to the strengths of the then-dominant players Michael Jordan and Kobe Bryant to win those 11 titles. However, the game passed Phil Jackson by as the economics of the 3-point shot changed how to win. Jackson’s tried-and-true “Triangle Offense” failed with the New York Knicks, leading to the team’s dramatic under-performance and ultimately his firing. It serves as a stark reminder of how important it is to be ready to unlearn old skills in order to move forward.


Why a CHRO Will Be the Next Must-Have Role in the Boardroom


The primary job of any board of directors is to make sure the right leadership team is in place to drive the business, and the CEO is at the heart of that goal. A strong leadership bench is one with a succession plan in place, but this is a delicate topic. There are disclosure issues around such material information, of course, and some CEOs need encouragement to leave when the time is right – whether the change is contentious or not. Similarly, boards are often nervous about the timing of such shifts, particularly when they perceive a lack of a strong successor. Managing through these issues doesn’t come naturally to many board members, but it does for experienced CHROs. Such executives can offer insights on planned transitions and how to navigate the process, from identifying internal candidates to talking about development plans to introducing these topics to chief executives.


How IoT Affects the CISO's Job

"There are a lot of companies that are well positioned to handle IoT, but there are a lot that are so focused on just the day-to-day security work of keeping windows PCs and Linux servers secure, that they haven't gotten started at all," Pesatore says in an interview with Information Security Media Group. CISOs need to take steps to ensure they're involved in device acquisition decisions in all departments within the enterprise, he stresses. "Security and IT need to be involved in the decisions on building and buying these types of devices so we can make sure they are as secure and safe as possible," he says. And security staffs need to diversify their skills as a wider variety of devices are used in the enterprise, he adds. "When you look at the internet of things devices, it's a very heterogeneous world. There are all kinds of different operating systems and software and communications standards," he notes.



Quote for the day:

"The man who is afraid to risk failure seldom has to face success." -- John Wooden


Daily Tech Digest - February 17, 2018

The Three Do’s of DDoS protection

Attackers have been putting DDoS firmly in the IT and network consciousness – and they did it by substantially raising the bar for just how big and disruptive a DDoS attack can now be. ... DDoS attacks are not just growing in strength and frequency, but also diversifying in whom they target and in kind, with application-layer as well as volumetric attacks. You no longer need to be a big organisation to be impacted by DDoS – everyone is now a target. And as more of us conduct our business on internet-based systems, the risk of costly disruption grows. Attacks are backed by significant malicious resources, and are most effectively countered by the service provider that connects you to the Internet. DDoS attacks can strike at any time, potentially crippling network infrastructure and severely degrading network performance and reachability. Depending upon the type and severity of an attack on a website or other IP-accessible system, the impact can result in thousands or even millions of dollars of lost revenue.



When Streams Fail: Implementing a Resilient Apache Kafka Cluster at Goldman Sachs


Gorshkov reminded the audience of latency numbers that every programmer should know, and stated that the speed of light dictates that a best-case network round trip from New York City to San Francisco takes ~60ms, Virginia to Ohio takes ~12ms, and New York City to New Jersey takes ~4ms. With data centers in the same metro area or otherwise close, multiple centers can effectively be treated as a single redundant data center for disaster recovery and business continuity. This is much the same approach as taken by modern cloud vendors like AWS, with infrastructure being divided into geographic regions, and regions being further divided into availability zones. Allowing multiple data centers to be treated as one leads to an Apache Kafka cluster deployment strategy as shown on the diagram below, with a single conceptual cluster that spans multiple physical data centers.
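In Kafka terms, a single cluster spanning nearby data centers is usually achieved with rack awareness: each broker's `broker.rack` is set to its data center, and the partition assigner then spreads a partition's replicas across racks. The Python sketch below is a toy illustration of that placement idea only; the function and data-structure names are hypothetical, not Kafka's actual code.

```python
from itertools import cycle

def assign_replicas(brokers_by_dc, num_partitions, replication_factor):
    """Toy rack-aware placement: spread each partition's replicas across
    data centers so losing one DC still leaves replicas elsewhere.
    Assumes replication_factor <= number of DCs (otherwise DCs repeat)."""
    dc_names = sorted(brokers_by_dc)
    assignment = {}
    for p in range(num_partitions):
        dc_iter = cycle(dc_names)
        for _ in range(p % len(dc_names)):  # rotate the starting DC per partition
            next(dc_iter)
        replicas = []
        for i in range(replication_factor):
            dc = next(dc_iter)
            brokers = brokers_by_dc[dc]
            replicas.append(brokers[(p + i) % len(brokers)])
        assignment[p] = replicas
    return assignment

# Three metro-area data centers, two brokers each
layout = {"dc1": [1, 2], "dc2": [3, 4], "dc3": [5, 6]}
plan = assign_replicas(layout, num_partitions=6, replication_factor=3)
```

On top of such a layout, Kafka settings like `acks=all` with `min.insync.replicas=2` mean a produce is not acknowledged until the write survives outside the originating data center.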


Can Cybersecurity be Entrusted with AI?

Will AI be the bright future of security, as the sheer volume of threats becomes very difficult for humans to track alone? Or might AI usher in a darker era? It all depends on natural intelligence, which is needed to develop AI and machine-learning tools. Despite popular belief, these technologies cannot replace humans (my own personal opinion); using them requires human training and oversight. As the results reveal, AI is here to stay and will have a large impact on security strategies moving forward, but side by side with natural intelligence. The current state of cybersecurity is highly vulnerable, but bringing AI systems into the mix can serve as a real turning point. These systems come with a number of substantial benefits that will help prepare cybersecurity professionals to take on cyber-attacks and safeguard the enterprise.


What’s Driving India’s Fintech Boom?

Mobile Payments
Industry analysts expect that payments will be a pathway to other areas such as lending, insurance, wealth management and banking. “Most people in India lack credit history. Digital payments give them a credit history which can be leveraged in other areas,” explains Prantik Ray, professor of finance at XLRI – Xavier School of Management. Ravi Bapna, professor of business analytics and information systems at the Carlson School of Management, University of Minnesota, adds: “Innovative data-driven and behavioral risk management models can overcome barriers that arise from lack of widespread and robust credit scoring of individuals.” Rajesh Kandaswamy, research director-banking and securities at Gartner, points out that in mature geographies, payment mechanisms are already evolved and basic banking services are a given. However, in countries like China and India, digital payments are evolving in tandem with the growth in ecommerce.


In a digital world, do you trust the data?

Trust is now a defining factor in an organization's success or failure. Indeed, trust underpins reputation, customer satisfaction, loyalty and other intangible assets. It inspires employees, enables global markets to function, reduces uncertainty and builds resilience. The problem is that - in today's environment - trust isn't just about the quality of an organization's brands, products, services and people. It's also about the trustworthiness of the data and analytics that are powering its technology. KPMG International's Guardians of trust report explores the evolving nature of trust in the digital world. Based on a survey of almost 2,200 global information technology (IT) and business decision-makers involved in strategy for data initiatives, this report identifies some of the key trends and emerging principles to support the development of trusted analytics in the digital age.


The Great Disruption of Your Career

Seriously; even coffee shops are now using affordable facial recognition technology with basic CRM to create an amazing experience for customers... "Hi Tony, your triple-shot decaf, skim, soy latte is on its way... did you manage to go water-skiing on the weekend?" Perfect... I'll be able to keep my head down deleting spammy emails while rocking away to Spotify... no need to place an order in advance, make eye contact or interact with anyone while securing my morning caffeine fix :-) White collar professions are not immune to the employment apocalypse. Combinations of technology with offshoring to lower cost markets are already biting like a savage dog at your crotch. Do you lie awake at night wondering how you can make yourself indispensable? What do you really do that cannot be automated?


Designing, Implementing, and Using Reactive APIs


Reactive programming is a vast subject and is well beyond the scope of this article, but for our purposes, let's define it broadly as a way to build event-driven systems in a more fluent way than we would with a traditional imperative programming style. The goal is to move imperative logic to an asynchronous, non-blocking, functional style that is easier to understand and reason about. Many of the imperative APIs designed for these behaviors (threads, NIO callbacks, etc.) are not considered easy to use correctly and reliably, and in many cases using these APIs still requires a fair amount of explicit management in application code. The promise of a reactive framework is that these concerns can be handled behind the scenes, allowing the developer to write code that focuses primarily on application functionality. The very first question to ask yourself when designing a reactive API is whether you even want a reactive API! Reactive APIs are not the correct choice for absolutely everything.
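As a flavor of the fluent style being contrasted with imperative code, here is a toy, synchronous stand-in written in Python. A real reactive library such as Reactor or RxPY adds asynchrony and backpressure; `Flux` here is just a name borrowed for illustration.

```python
class Flux:
    """Toy fluent event pipeline: each operator returns a new Flux."""
    def __init__(self, items):
        self._items = list(items)

    @classmethod
    def just(cls, *items):
        return cls(items)

    def map(self, fn):
        return Flux(fn(x) for x in self._items)

    def filter(self, pred):
        return Flux(x for x in self._items if pred(x))

    def subscribe(self, consumer):
        for x in self._items:
            consumer(x)

out = []
Flux.just(1, 2, 3, 4).filter(lambda n: n % 2 == 0).map(lambda n: n * 10).subscribe(out.append)
# out is now [20, 40]
```

The declarative chain states what happens to each event; the looping and callback plumbing stay inside the framework, which is exactly the concern-hiding described above.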


Wireless Reshaping IT/OT Network Best Practices

IoT, its accompanying cloud services and Big Data analytics routinely deliver immense and unheard-of amounts of data from devices and sensors. That means network architectures continue to adapt and will change dramatically to implement the data flow from these sensors. That also means networks will become outward focused, as the amount of data acquired from edge devices dwarfs the amount of data produced inside the network. Previously, network architecture for wireless used a design that had a wireless access point directly and quickly connected to wired Ethernet. Network backhauls were always wired. More recently, however, companies with sprawling multi-building campuses, manufacturing sites or process plants have been using wireless backhauls. Some of these use WiMAX (IEEE 802.16) as broadband microwave links. Others are designed as optical. These wireless backhauls are significantly less expensive to install, and provide secure data transmission.


GDPR: The Data Subject Perspective

The discussion that followed highlighted a key point: the value of the data means that the stakes are high. Organizations are understanding how much value can be driven by intelligent use of data. My opinion is that many individuals have sold themselves short in negotiations around use of personal data. This is because individual data subjects have had limited knowledge, power or influence at a negotiating table that doesn't really exist – unlike the agreement process for other contracts, in which both parties are normally well informed. GDPR implication: The key is intelligent use of data. Personal data which is not managed correctly will have less impact on an organization's bottom line, and will become a burden under GDPR. Organizations should review their data collection mechanisms and consider data minimisation and data-masking technology to implement privacy by design and by default.


A business guide to raising artificial intelligence in a digital economy

The report highlights a need for a fundamental shift in leadership that is required to cultivate partnerships with customers and business partners, and to further accelerate the adoption of artificial intelligence as the fuel for enterprises to grow and deliver social impact. Accenture's 2018 report ...  highlights how rapid advancements in technologies -- including artificial intelligence (AI), advanced analytics and the cloud -- are enabling companies to not just create innovative products and services, but change the way people work and live. This, in turn, is changing companies' relationships with their customers and business partners. "Technology," said Paul Daugherty, Accenture's chief technology and innovation officer, "is now firmly embedded throughout our everyday lives and is reshaping large parts of society. This requires a new type of relationship, built on trust and the sharing of large amounts of personal information."



Quote for the day:


"A wise man gets more use from his enemies than a fool from his friends." -- Baltasar Gracian


Daily Tech Digest - February 16, 2018

5 early warning signs of project failure
One of the first (and biggest) warning signs that your project may be headed for failure is an internal culture that is resistant to change. Projects bring about improvements in workflows and new operational best practices, often with an increased use of technology. These changes can create a significant amount of fear, as employees assume the end result will mean job losses or major disruption to their individual working world. Many projects have been internally sabotaged right from the start as a result of these fears. How can you tell if you have a culture that is resistant to change? Employees who are resistant to change are often reluctant to share information and exhibit negative attitudes towards the project and its benefits, either through direct communication or body language and facial expressions. Alleviating these fears by creating a culture that embraces change is key.



A quick-and-dirty way to predict human behavior

Machine learning and AI technologies are everywhere. One of the top uses is to predict human behavior. Luckily, people are creatures of habit. Moreover, when given the freedom to do anything they want, most people will do what everyone else is doing (I'm paraphrasing a badly remembered quote). That makes it kind of easy to predict what people will do next, at least statistically. Imagine you go to a website and start rating things. First you rate a cat picture, then a baseball, and then a Magpul FMG-9. There were also a few things you didn't rate on the same page. Assuming that someone else rated things similarly to you, we can probably "guess" what you'd rate the other things. ... The algorithm that many recommendations are based on is called Alternating Least Squares (or some form of it). With ALS, you use a training set or, if you have a lot of users, you can use some of them as the training set to predict ratings for the others.
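To make the ALS idea concrete, here is a deliberately tiny rank-1 sketch in pure Python. This is illustrative only; production recommenders use higher-rank factors, regularization, and libraries such as Spark MLlib.

```python
def als_rank1(ratings, users, items, iters=20):
    """Rank-1 alternating least squares over observed ratings only.
    ratings: {(user, item): value}; unobserved pairs are simply absent."""
    u = {usr: 1.0 for usr in users}   # user factors
    v = {it: 1.0 for it in items}     # item factors
    for _ in range(iters):
        for usr in users:             # fix item factors, solve user factors
            num = sum(r * v[it] for (uu, it), r in ratings.items() if uu == usr)
            den = sum(v[it] ** 2 for (uu, it), r in ratings.items() if uu == usr)
            if den:
                u[usr] = num / den
        for it in items:              # fix user factors, solve item factors
            num = sum(r * u[uu] for (uu, ii), r in ratings.items() if ii == it)
            den = sum(u[uu] ** 2 for (uu, ii), r in ratings.items() if ii == it)
            if den:
                v[it] = num / den
    return u, v

# Alice rated both items; Bob agrees with her on the cat picture,
# so ALS predicts he would also rate the baseball around 3.
R = {("alice", "cat"): 5, ("alice", "ball"): 3, ("bob", "cat"): 5}
u, v = als_rank1(R, ["alice", "bob"], ["cat", "ball"])
prediction = u["bob"] * v["ball"]   # converges toward 3.0
```

Each half-step is an ordinary least-squares solve with the other side held fixed, which is the "alternating" in the name.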


HP expands its Device-as-a-Service offering to include Apple

Through its DaaS offering, HP determines the contractual relationship enterprises want to have, whether it's with a value-added reseller, a global systems integrator or a direct relationship with HP, "and then we provide it back to you within a utility model or a per-user, per-device pricing model," said Jonathan Nikols, global head of HP's Device-as-a-Service. For example, the cost of a contract would include an SLA on how fast the turnaround time on a device repair and replacement should be – whether it's next day or in four days. When an end user's device breaks or needs replacing, they file a help-desk ticket just as they would with any IT shop; the ticket is automatically routed to the HP DaaS service. The service also handles employee on-boarding and off-boarding. Mixed-device environments are the norm now, HP said, making it increasingly difficult and costly for organizations to manage multiple device types, OSes and vendors.


Who should buy a Ryzen APU, and who shouldn't

If you're asking yourself, "should I buy a Ryzen APU?" for a new budget gaming PC, the short answer is yes, probably. That's because for building a ground-up, entry-level gaming machine, the Ryzen APU is the best game in town, and possibly the only game for DIY builders, in the face of wallet-busting GPU prices. But for everyone? Well, no. There is no one-size-fits-all answer, so read on to learn who should buy the Ryzen APU—and who shouldn't.  ... AMD's new APUs have essentially enough CPU and GPU power to enable satisfying gaming at 720p to 1080p. Both APUs combine quad-core Zen x86 cores with up to 11 Vega graphics cores, and the Ryzen 5 2400G also has SMT. The integrated graphics basically offers double to triple the gaming performance of Intel's HD 630 graphics, which is inside everything from an $85 Pentium to a $380 Core i7.


How your company can prevent a data breach – and what to do if one occurs
As any executive whose company has suffered a data breach knows, the true costs of cybercrime are devastating, far-reaching and continue long after business functions have been restored. Between investigation and repair costs, customer notification requirements, contractual liabilities and workflow continuity, worldwide spending to mitigate the impact of cyberattacks is projected to reach an unprecedented $90 billion this year. Then there are the indirect costs, which include legal fees and public reputation rebuilding. This last component is particularly crucial, since a recent Gemalto survey revealed that 70 percent of consumers said they would cut ties with a company that had suffered a cyberattack. Indeed, businesses are anticipated to bear the brunt of cybercrime's growing financial burden. Over half of last year's cyberattacks targeted corporations, and among small businesses, 58 percent have been hit by a data breach.


Juniper Networks Expands Portfolio for Secure Multicloud Computing


“The promise of multicloud is to deliver an infrastructure that is secure, ubiquitous, reliable and fungible and where the migration of workloads will be a simple and intuitive process,” said Bikash Koley, chief technology officer at Juniper Networks. “For IT to be successful in becoming multicloud-ready, it is critical organizations consider not only the data center and public cloud, but also the on-ramps of their campus and branch networks. Otherwise, enterprises will face fractured security and operations as network boundaries prevent seamless, end-to-end visibility and control.” A Juniper-commissioned study by PwC found that workload migration is underway in the next three years across every core functional area, such as customer service, systems management, marketing, compute bursting, business applications, DevOps and backup and recovery.


Bitcoin thieves use Google AdWords to target victims

The fraudsters established "gateway" phishing links that appeared in search results when potential victims searched Google for cryptocurrency-related keywords, such as "blockchain" or "bitcoin wallet." These links, bolstered by the purchase of Google AdWords, would then send victims to malicious domains, which would serve phishing content depending on the IP address and likely language of the visitor. According to the team, the hackers are focusing on countries where access to traditional banking may be difficult, such as Estonia, Nigeria, Ghana, and a number of other African countries. When access to banking is difficult, cryptocurrencies, as decentralized assets recorded on a blockchain, may empower users financially. However, it seems that the cybercriminals behind the campaign also know there may be more interest from residents of these countries, and that interest has shaped the focus of their phishing campaigns.


Google's Android P will make it easier for OEMs to copy iPhone X

Google's push to make Assistant more visible in Android comes as the personal digital assistant market is becoming too crowded, forcing potential competitors out. While Amazon's Alexa and Microsoft's Cortana are available as downloadable apps, Samsung's built-in and much maligned Bixby assistant continues to linger, though The Verge has called for Bixby's death. Cortana was conspicuously absent at CES 2018, leading ZDNet's Larry Dignan to declare the trade show "Cortana's Funeral." Facebook announced the discontinuation of its virtual assistant, M, in January. As for screen cutouts, it remains to be seen whether this design trend will persist, or whether it will wind up as a fad similar to 3D phones. While Apple's use of the technology is notable, it seems unlikely that manufacturers are holding back on shipping phones for lack of OS-level support. Of note, Sharp also produced a handful of 3D smartphones available primarily in Japan.


Nokia is re-evaluating its wearables division


Once a global leader in mobile, the company failed to embrace the smartphone revolution, selling to Microsoft, which then shuttered the whole thing entirely. Of course, the Nokia name is back in the smartphone space, but that comes under a licensing deal through HMD — a company founded by former execs from the company. Interestingly, recent numbers show that the brand has actually been doing pretty well. Wearables, on the other hand, have stagnated, forcing brands to exit the space, sell or shutter entirely. The herd has thinned over the past year, and even top names like Fitbit have struggled to keep their head above water. For Nokia, acquiring a company like Withings no doubt seemed like a quick way to hit the ground running — but the timing was rough on this one. Hopefully this doesn’t mark the end of the Withings/Nokia Health line, which made some really solid and innovative devices.


Cloud sync vs backup: Which disaster recovery works better for business continuity?

Backup is the traditional way most businesses protect their digital assets from disaster. At regular intervals, changes in local storage are transferred to either a local backup device or a cloud backup service. Usually, these changes are incremental and go into backup archives. A good backup service will store ongoing snapshots, so it's always possible to go back in time and recover an old document. The gotcha with backup systems is that recovery is often cumbersome. You usually have to launch a backup program on your PC, dig through the various backup instances, and initiate a restore. In most cases, you can't really use or read the files in the backups until they're restored to your computer. Cloud sync, by contrast, takes files that exist on your local computer and moves them into a cloud infrastructure. Most cloud infrastructures encourage you to work on the files in the cloud.
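The incremental-snapshot behavior described above can be sketched in a few lines of Python. This is a toy model, not any particular backup product's format: each run stores only content whose hash changed, and restoring walks the snapshots newest-first.

```python
import hashlib
import time

def take_snapshot(files, previous=None):
    """files: {path: bytes}. Returns a snapshot storing only changed content."""
    previous = previous or {"index": {}, "blobs": {}}
    index, blobs = {}, {}
    for path, data in files.items():
        digest = hashlib.sha256(data).hexdigest()
        index[path] = digest                  # full picture of this point in time
        if previous["index"].get(path) != digest:
            blobs[digest] = data              # new or changed: store the content
    return {"index": index, "blobs": blobs, "taken_at": time.time()}

def restore(path, snapshots):
    """Find the most recent version of a file across the snapshot chain."""
    for snap in reversed(snapshots):
        digest = snap["index"].get(path)
        if digest is None:
            continue
        for s in reversed(snapshots):         # content may live in an older snapshot
            if digest in s["blobs"]:
                return s["blobs"][digest]
    return None
```

The two-step lookup in `restore` is the "dig through backup instances" cost the paragraph mentions: the index tells you which version you want, but the bytes may sit in any earlier archive.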



Quote for the day:


"A leadership disposition guides you to take the path of most resistance and turn it into the path of least resistance." -- Dov Seidman


Daily Tech Digest - February 15, 2018

How cloud computing surveys grossly underreport actual business adoption

Whatever the size of the IT department, all companies are having to fundamentally rethink their applications, with cloud-first increasingly a matter of survival. One example I am familiar with is that of a large enterprise that was trying to figure out how to rearchitect a massive application first conceived in the early 2000s. At the time it was first built, the enterprise had very different needs from today—thousands of users, gigabytes or terabytes of data, customers all sitting in the same region, performance important but not all-consuming. This enterprise built on internal servers and focused on a scale-up model. That's all there was. Today, that same application has millions of users, distributed globally. The data volume is in the petabytes (and approaches exabytes). Performance latency must be measured in milliseconds and, in some cases, microseconds. There is no option but cloud. More applications look like this today than like that earlier instantiation.



How AI will underpin cyber security in the next few years


Artificial intelligence (AI) is emerging as the frontrunner in the battle against cyber crime. With autonomous systems, businesses are in a far better place to strengthen and reinforce cyber security strategies. But does this technology pose challenges of its own? Large organisations are always exposed to cyber criminals, and so they need appropriate infrastructure to spot and combat threats quickly. James Maude, senior security engineer at endpoint security specialist Avecto, says systems incorporating AI could save firms billions in damage from attacks. “Although AI is still in its infancy, it’s no secret that it is becoming increasingly influential in cyber security,” he says. “In fact, AI is already transforming the industry, and we can expect to see a number of trends come to a head, reshaping how we think about security in years to come. We might expect to see AI applied to cyber security defences, potentially avoiding the damage from breaches costing billions.”


IBM sees blockchain as ready for government use

There is a growing concern that cryptocurrency could be a threat to the global financial system through unbridled speculation and unsecured borrowing by consumers looking to purchase the virtual money. ... "First and foremost, blockchain is changing the game. In today's digitally networked world, no single institution works in isolation. At the center of a blockchain is this notion of a shared immutable ledger. You see, members of a blockchain network each have an exact copy of the ledger," Cuomo said. "Therefore, all participants in an interaction have an up-to-date ledger that reflects the most recent transactions – and these transactions, once entered, cannot be changed on the ledger." For blockchain to fulfill its potential, it must be "open," Cuomo emphasized, and based on non-proprietary technology that will encourage widespread industry adoption by ensuring compatibility and interoperability.
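The "immutable shared ledger" property comes from hash chaining: each entry commits to the hash of the previous one, so editing any past transaction invalidates everything after it. A minimal Python sketch, omitting the consensus and replication that a real blockchain adds:

```python
import hashlib
import json

def add_entry(ledger, transaction):
    """Append a transaction, chaining it to the previous entry's hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    body = {"tx": transaction, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    ledger.append({**body, "hash": digest})

def verify(ledger):
    """Recompute every hash; any edit to a past entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in ledger:
        body = {"tx": entry["tx"], "prev": entry["prev"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True
```

Because every participant holds the same chain, a tampered copy fails verification against everyone else's, which is why transactions "once entered, cannot be changed."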


7 threat modeling mistakes you’re probably making

The Open Web Application Security Project (OWASP) describes threat modeling as a structured approach for identifying, quantifying and addressing the security risks associated with an application. It essentially involves thinking strategically about threats when building or deploying a system so proper controls for preventing or mitigating threats can be implemented earlier in the application lifecycle. Threat modeling as a concept certainly isn't new, but few organizations have implemented it in a meaningful way. Best practices for threat models are still emerging, says Archie Agarwal, founder and CEO of ThreatModeler Software. "The biggest problem is a lack of understanding of what threat modeling is all about," he says. There are multiple ways to do threat modeling, and companies often run into trouble figuring out how to look at it as a process and how to scale it. "There is still a lack of clarity around the whole thing."


Skype can't fix a nasty security bug without a massive code rewrite

Security researcher Stefan Kanthak found that the Skype update installer could be exploited with a DLL hijacking technique, which allows an attacker to trick an application into loading malicious code instead of the correct library. An attacker can download a malicious DLL into a user-accessible temporary folder and rename it to an existing DLL that can be modified by an unprivileged user, like UXTheme.dll. The bug works because the malicious DLL is found first when the app searches for the DLL it needs. Once installed, Skype uses its own built-in updater to keep the software up to date. When that updater runs, it uses another executable file to run the update, which is vulnerable to the hijacking. The attack reads on the clunky side, but Kanthak told ZDNet in an email that the attack could be easily weaponized. He explained, providing two command line examples, how a script or malware could remotely transfer a malicious DLL into that temporary folder.


Cryptomining malware continues to drain enterprise CPU power

“Over the past three months cryptomining malware has steadily become an increasing threat to organizations, as criminals have found it to be a lucrative revenue stream,” said Maya Horowitz, Threat Intelligence Group Manager at Check Point. “It is particularly challenging to protect against, as it is often hidden in websites, enabling hackers to use unsuspecting victims to tap into the huge CPU resource that many enterprises have available. As such, it is critical that organizations have the solutions in place that protect against these stealthy cyber-attacks.” In addition to cryptominers, researchers also discovered that 21% of organizations have still failed to deal with machines infected with the Fireball malware. Fireball can be used as a full-functioning malware downloader capable of executing any code on victims’ machines. It was first discovered in May 2017, and severely impacted organizations during the summer of 2017.


Intel launches new Xeon processor aimed at edge computing

Edge computing is an important, if very early stage, development that seeks to put computing power closer to where the data originates, and it is seen as working hand in hand with Internet of Things (IoT) devices. IoT devices, such as smart cars and local sensors, generate tremendous amounts of data. A Hitachi report (pdf) estimated that smart cars would at some point generate 25GB of data every hour. This can’t all be sent back to data centers for processing. It would overload the networks and the data centers. Instead, edge computing processes the data at its origin. So, smart car data generated in New York would be processed in New York rather than sent to a remote data center. Major data center providers, such as Equinix and CoreSite, offer such services at their data centers around the country, and startup Vapor IO offers ruggedized mini data centers that can be deployed at the base of cell phone towers.


Q# language: How to write quantum code in Visual Studio

Designed to use familiar constructs to help program applications that interact with qubits, it takes a similar approach to working with coprocessors, providing libraries that handle the actual quantum programming and interpretation, so you can write code that hands qubit operations over to one of Microsoft's quantum computers. Bridging the classical and quantum computing worlds isn't easy, so don't expect Q# to be like Visual Basic. It is more like using a set of Fortran mathematics libraries, with the same underlying assumption: that you understand the theory behind what you're doing. One element of the Quantum Development Kit is a quantum computing primer, which explores issues around using simulators, as well as providing a primer in linear algebra. If you're going to be programming in Q#, an understanding of key linear algebra concepts around vectors and matrices is essential—especially eigenvalues and eigenvectors, which are key elements of many quantum algorithms.
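As a taste of why eigenvalues and eigenvectors matter here, the sketch below (plain Python, not Q#) diagonalizes the Pauli-X gate using the closed form for a 2x2 real symmetric matrix. The helper name is ours, not part of the Quantum Development Kit.

```python
import math

def eig_sym_2x2(a, b, d):
    """Eigenvalues and unit eigenvectors of [[a, b], [b, d]] (real symmetric)."""
    mean = (a + d) / 2
    r = math.hypot((a - d) / 2, b)
    lam1, lam2 = mean + r, mean - r
    if b != 0:
        v1, v2 = (lam1 - d, b), (lam2 - d, b)   # (lam - d, b) solves the eigen equation
    else:
        v1, v2 = (1.0, 0.0), (0.0, 1.0)         # already diagonal
    unit = lambda v: tuple(x / math.hypot(*v) for x in v)
    return (lam1, unit(v1)), (lam2, unit(v2))

# The Pauli-X (NOT) gate [[0, 1], [1, 0]] has eigenvalues +1 and -1;
# its eigenvectors are the |+> and |-> superposition states.
(l1, u1), (l2, u2) = eig_sym_2x2(0.0, 1.0, 0.0)
```

Many quantum algorithms, such as phase estimation, amount to reading off exactly these eigenvalues for a gate's eigenvectors, which is why the primer leans on this piece of linear algebra.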


Breaking the cycle of data security threats


First, there’s the lack of mandatory reporting and the limits of voluntary reporting. Second, the lack of real protection for the personal information we’ve entrusted to various companies. Third, the clear indication that CEOs and corporations still aren’t paying enough attention to cybersecurity issues; perhaps because there’s been a startling lack of real penalty for failing to protect information from hackers. Finally, there’s a need to recognize that securing information is hard work on an ongoing basis. It’s a truism of security that no product is a “silver bullet” to put an end to attacks. Another industry truism says security is a journey, not a destination. There are few regulations that require organizations to report data breaches, especially those outside financial services and health care. Is it any surprise that companies are reporting breaches years after they occurred? How many unreported breaches will never surface?


The Top Five Data Governance Use Cases and Drivers

As the applications for data have grown, so too have the data governance use cases. And the legacy, IT-only approach to data governance, Data Governance 1.0, has made way for the collaborative, enterprise-wide Data Governance 2.0. In addition to increasing data applications, Data Governance 1.0's decline is being hastened by recurrent failings in its implementation. Leaving it to IT, with no input from the wider business, ignores the desired business outcomes and the opportunities to contribute to and speed their accomplishment. Lack of input from the departments that use the data also causes data quality and completeness to suffer. So Data Governance 1.0 was destined to fail in yielding a significant return. But changing regulatory requirements and mega-disruptors effectively leveraging data have spawned new interest in making data governance work.



Quote for the day:


"Technological change is not additive; it's ecological. A new technology does not merely add something; it changes everything." -- Neil Postman