Daily Tech Digest - December 02, 2018

How Technology is Changing the Lending Landscape
For lenders, adoption of tech-enabled risk modelling techniques, for instance, removes limitations associated with manual credit assessment and translates directly into speedier disbursement of credit to qualifying applicants. Beyond enhancing internal processes, technology is enabling lenders to target potential applicants based on their internet search and social media behaviour patterns, helping them expand sales pipelines and reach beyond their target markets. ‘Pay-as-you-use’ tech offered by new-age fintech vendors greatly helps level the playing field for newer players against well-heeled larger ones. The arrival of tech-enabled alternative financing platforms like ‘Peer to Peer’ and ‘New to Credit’ lending is also increasing choice for the borrower. In the pre-tech era, borrowing was like buying groceries from the only shop in your neighbourhood: you had to buy what was available. Lenders, banks and non-banking finance companies offered a fixed set of loan options, and borrowers had no choice but to settle for whatever was on offer.


Getting your Fintech Ready for Investment with haysmacintyre

For many start-ups, an ‘angel’ investor will be the preferred choice. These investors can offer extensive experience and expertise, helping business leaders through unfamiliar territory whilst standing back from the day-to-day management of the business. Often referred to as “patient capital”, angel investors are generally less concerned with rapid returns and will support the business throughout its growth. Alternatively, fintech companies may turn to venture capital (VC) investment. It is worth considering, however, that the VC investor will want to exit at some point down the line, many departing within five years of Series A investment. It should also be kept in mind that they may want some control over the day-to-day operation of the business, and possibly a position on the board. Start-up businesses often receive investment that is particularly hands-off, with investors paying little attention to day-to-day matters. As the scale of investment increases, businesses should prepare for this dynamic to change.


Redesigning the Office App Icons to Embrace a New World of Work


Today’s workforce includes five generations using Office on multiple platforms and devices and in environments spanning work, home, and on the go. We wanted a visual language that emotionally resonates across generations, works across platforms and devices, and echoes the kinetic nature of productivity today. Our design solution was to decouple the letter and the symbol in the icons, essentially creating two panels (one for the letter and one for the symbol) that we can pair or separate. This allows us to maintain familiarity while still emphasizing simplicity inside the app. Separating these into two panels also adds depth, which sparks opportunities in 3D contexts. Through this flexible system, we keep tradition alive while gently pushing the envelope. ... To reflect this in the icons, we removed a visual boundary: the traditional tool formatting. Whereas prior Office icons had a document outline for Microsoft Word and a spreadsheet outline for Excel, we now show lines of text for Word and individual cells for Excel.


3 Signs of a Good AI Model

The first step in understanding how to achieve XAI is to understand what a model is and how it works. Simply stated, a model is a set of transformations that convert raw data into information, most often by applying statistics and advanced mathematical constructs such as calculus and linear algebra. What makes AI models different from traditional data transformations is that the model is constructed by employing algorithms to expose patterns in historical data; those patterns form the basis for the mathematical transformation. Traditional data transformations, by contrast, are most often a set of directives and rules established and programmed by a developer to achieve a specific purpose. Because AI models learn from having more data, they can be regenerated periodically to sense and adjust to changes in the underlying behaviors associated with the transformation. One of the strengths of AI is that the process of creating a model can identify patterns that are not obvious or intuitive from simply looking at the data.
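
To make the distinction concrete, here is a toy sketch in Python using scikit-learn (an illustrative choice, not one named in the article): the same pricing transformation expressed once as a developer-written rule and once as a model whose coefficients are exposed from historical data.

```python
# Toy contrast between a programmed transformation and a learned one.
# scikit-learn and the pricing scenario are illustrative assumptions.
from sklearn.linear_model import LinearRegression

# Traditional transformation: a directive hard-coded by a developer.
def rule_based_price(square_metres):
    return 3000 * square_metres + 50000

# AI model: the same shape of transformation, but the coefficients are
# patterns exposed from historical data rather than programmed.
history_X = [[40], [55], [70], [90], [120]]            # square metres
history_y = [172000, 214000, 259000, 321000, 409000]   # observed prices
model = LinearRegression().fit(history_X, history_y)

print(rule_based_price(80))        # the rule always says 290000
print(model.predict([[80]])[0])    # the model says what the data implies
# Refitting on fresh history lets the model adjust to changed behavior;
# the hard-coded rule must be reprogrammed by hand.
```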


The future of cash in Canada

The Bank of Canada staff economists considered all of this in a discussion paper released this fall called "Is a Cashless Society Problematic?" The paper cites the consistent decline of cash payments in Canada for decades. It also mentions an analysis by Forex Bonuses, which declared Canada the top country in the world embracing cashless technology. A very close second was Sweden, a country where the government is now studying how going cashless could affect the nation. The findings, referenced in the paper released by the Bank of Canada, focused on indicators such as the number of credit cards per person and the volume of cashless transactions. To save drivers time and to reduce traffic congestion, New York state switched to cashless toll booths at Grand Island, where millions of tourists travelling to Niagara Falls pass through every year. For users without a pass, the state mails the registered owner a bill. The complication is mailing bills to Canadian drivers: the state can't access the address information attached to Canadian license plates, something it didn't fully consider when implementing the system.


Enterprises face these 3 challenges while adopting AI


Several barriers to AI adoption, ranging from analyzing disparate data to identifying the right AI use case to hiring the best talent, hold back companies from seizing AI opportunities. For quite some time now, there has been a lot of buzz around AI and its promise to disrupt industries altogether. From digital assistants to robotic process automation to self-driving cars, AI has offered cool and innovative applications that were once only the subject of science fiction. Today, AI has reached a level of precision where it can understand human emotions too. The power of AI to make machines ‘smart’ and ‘intelligent’ has prompted many industries to invest in AI projects. The decision to leverage AI to aid digital transformation is understandable. But companies should first analyze the potential barriers to AI adoption so that they can achieve a successful AI implementation. While companies think about leveraging AI to transform their existing workflows, they should keep these potential hurdles in mind and plan their AI journey accordingly.


IoT Trends to Watch for in 2019

Throughout 2018, the staggering growth of digital assistants, such as Amazon’s Alexa and Google Home, showed that smart devices are here to stay. While the concept of smart toasters has been a long-running joke, analysts forecast strong growth among consumer-facing IoT devices of all shapes and sizes. A bit of smart technology can simplify our home lives; automated vacuum cleaners have long been popular, but adding in some smart technology can make them even more useful. Smart technology is already making home security systems far more capable, and bringing smart technology into the kitchen can make it easier to save time while preparing meals. Businesses are certainly looking into investing in smart technology, and smart desks and smart walls are expected to become far more common in 2019. If there’s any perceived benefit of adding IoT technology to a consumer device, someone is likely to offer it for sale, even if the value added is dubious. Edge devices have become staples of typical IoT installations, as they allow for more efficient operations and better responsiveness.


Azure Service Fabric Mesh: A Platform for Building Mission Critical Microservices


The Service Fabric Cluster provides you with a reliable and scalable cluster of VMs running the Service Fabric runtime, into which you deploy and manage your applications/services (containerized or non-containerized) via a highly available cluster endpoint. The Service Fabric runtime makes service placement decisions based on its integration with the underlying Azure infrastructure, making those placements reliable. When using Azure Service Fabric Clusters, you have administrator access not only to your cluster but also to the VMs that make up the cluster. You pick the VM SKUs to meet your needs, and you decide on the network security rules and the autoscale rules by which you want to scale the cluster. You can set up automatic upgrades of the Service Fabric runtime and the VM operating system. With this offering, you pay only for the VMs, storage, and networking resources you use; the Service Fabric runtime is effectively free. It is a great fit for customers/ISVs who need full control over the infrastructure.


Adding Object Detection with TensorFlow to a Robotics Project


My robot uses the Robot Operating System (ROS). This is a de facto standard for robot programming, and in this article we will integrate TensorFlow into a ROS package. I'll try and keep the details of the ROS code to a minimum, but if you wish to know more, may I suggest you visit the Robot Operating System site and read my articles on Rodney. Rodney is already capable of moving his head and looking around and greeting family members that he recognises. To do this we make use of the OpenCV face detection and recognition calls. We will use TensorFlow in a similar manner to detect objects around the home, like, for instance, a family pet. Eventually the robot will be capable of navigating around the home looking for a particular family member to deliver a message to. Likewise, imagine you are running late returning home and want to check on the family dog. With the use of a web interface you could instruct the robot to locate the dog and show you a video feed of what it's doing. Now in our house we like to say our dog is not spoilt, she is loved.
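
For a feel of the plumbing involved, here is a minimal Python sketch of the shape such a ROS node can take. The topic name and the detect_objects() helper are hypothetical placeholders, not Rodney's actual code; the TensorFlow model itself is stubbed out.

```python
# Minimal ROS node sketch: subscribe to camera frames, hand each frame to
# an object detector, log what was found. Topic name and detect_objects()
# are hypothetical placeholders for the article's real package code.
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def detect_objects(frame):
    # Stub: run your TensorFlow model here and return (label, confidence)
    # pairs, e.g. [("dog", 0.93)].
    return []

def image_callback(msg):
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    for label, confidence in detect_objects(frame):
        rospy.loginfo("Detected %s (%.2f)", label, confidence)

if __name__ == "__main__":
    rospy.init_node("object_detection")
    rospy.Subscriber("/camera/image_raw", Image, image_callback)
    rospy.spin()   # hand control to ROS; callbacks fire per frame
```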


The Digital Twin Organization: Can Enterprise Architecture Help?

The Digital Twin of the Organization is a concept created by Gartner. Quite simply, it is predicated on using a digital representation of an organization (its business model, strategies etc.) to better plan and execute a business transformation initiative. The whole idea behind the digital twin concept, and the reason why it is so useful, is that it offers a virtual model that can be analyzed and tweaked more easily than the real thing. The new insights and efficiencies you uncover this way can in turn be used to improve the organization.  Model is the key word here. Models are massless, frictionless, virtually free, reusable, and – importantly – they are also the lifeblood of enterprise architecture. Thus, EA is by default positioned to play a key part in taking the Digital Twin of the Organization from concept to reality. We have been arguing the importance of a model-based approach to business change for quite some time on this blog, now it seems the future is starting to catch up. Let us have a more detailed look at how exactly EA helps, and offer some examples based on the BiZZdesign suite.



Quote for the day:


"To be able to lead others, a man must be willing to go forward alone." -- Harry Truman


Daily Tech Digest - December 01, 2018

Blockchain has been wildly mis-sold, but underneath it is a database with performance and scalability issues and a lot of baggage. Any claim made for blockchain could be made for databases, or simply publishing contractual or transactional data gathered in another form. Its adoption by non-technical advocates is faith-based, with vendors' and consultants' claims being taken at face value, as Eddie Hughes MP (Con, Walsall North) cheerfully confessed to the FT recently. "I'm just a Brummie bloke who kept hearing about blockchain, read a bit about it, and thought: this is interesting stuff. So I came up with this idea: blockchain for Bloxwich," said Hughes. As with every bubble, whether it's Tulip Mania or the Californian Gold Rush, most investors lose their shirts while a fortune is being made by associated services – the advisors and marketeers can bank their cash, even if there's no gold in the river.


Building Resilient Data Multiclouds


Resilience is risk mitigation that is engineered into all your IT assets. It’s the confidence that your infrastructure won’t fail you, especially in times of crisis. Resilience that’s baked into the True Private Cloud ensures that businesses can weather those once-in-a-lifetime “black swan,” “perfect storm,” and other disruption scenarios that can put them and their stakeholders out of business permanently. To evolve toward this resilience architecture, enterprises must take steps to ensure that their migration of data, analytics, and other IT infrastructure to cloud environments is comprehensively resilient. The path to the True Private Cloud requires a keen focus on building unshakeable resilience into distributed data assets. Management information systems and enterprise data warehouse systems have historically been viewed as “second-class citizens” among IT infrastructure platforms. Occasional failures of these systems were viewed as tolerated events.


Global Financial Services Bullish On AI, The 'Disruptive Tech' Frontrunner


AI and its application through machine learning is being increasingly used to automate processes such as credit decision-making and customer interaction as well as help detect fraud, money laundering and even terrorist activity. Capital markets-focused organizations such as investment banks are the furthest down the road in the financial services industry in adopting new disruptive technologies, with a little over half (51%) saying that AI, ...  and just 17% among those in the private wealth industry. Stephanie Miller, Chief Executive Officer of Intertrust, commenting in the wake of the findings said: “With the hype surrounding disruptive technology in the financial sector it is easy to lose sight of reality. The findings from this study suggest that while the industry is positive towards new technology such as AI, blockchain and robotics, only a minority of firms are currently putting it to use and the speed of travel remains cautious.”


Geospatial Data Brings Value Across Industries

Though it may seem like a highly technical concept, most people use some type of geospatial data system every day because such programs are used to route Uber drivers, assess credit risk and lending rates based on zip code, and determine insurance rates by identifying homes at risk of flooding, earthquakes, and other natural disasters. Even kids use geospatial data to play games like Pokemon Go. Geospatial information is everywhere and, in a world where everyone is attached to a smartphone, we’re constantly connected to it. Put simply, geospatial data just means that the information set is tied to zip codes, addresses, or coordinates, among other possibilities. It’s a map or an address book, reinterpreted for a digital ecosystem. Though there are plenty of groups building geospatial data sets, one of the factors that has most contributed to this new digital world is the availability of open data sets.


Built for realtime: Big data messaging with Apache Kafka

Apache Kafka's architecture is very simple, which can result in better performance and throughput in some systems. Every topic in Kafka is like a simple log file. When a producer publishes a message, the Kafka server appends it to the end of the log file for its given topic. The server also assigns an offset, which is a number used to permanently identify each message. As the number of messages grows, the value of each offset increases; for example, if the producer publishes three messages, the first might get an offset of 1, the second an offset of 2, and the third an offset of 3. When the Kafka consumer first starts, it will send a pull request to the server, asking to retrieve any messages for a particular topic with an offset value higher than 0. The server will check the log file for that topic and return the three new messages. The consumer will process the messages, then send a request for messages with an offset higher than 3, and so on.
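
To make the offset mechanics concrete, here is a toy in-memory simulation in Python of the append/offset/pull cycle just described; it mimics the behaviour of a Kafka topic log, not the real client API.

```python
# Toy simulation of one Kafka topic log with 1-based offsets, mirroring
# the example above; not the real Kafka protocol or client library.
class TopicLog:
    def __init__(self):
        self.messages = []            # append-only log for this topic

    def publish(self, message):
        self.messages.append(message)
        return len(self.messages)     # offset assigned to this message

    def pull(self, after_offset):
        # Return (offset, message) pairs with offsets above after_offset.
        return list(enumerate(self.messages, start=1))[after_offset:]

log = TopicLog()
for msg in ("m1", "m2", "m3"):
    log.publish(msg)                  # offsets 1, 2, 3

print(log.pull(after_offset=0))      # [(1, 'm1'), (2, 'm2'), (3, 'm3')]
print(log.pull(after_offset=3))      # [] -- nothing new yet
```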


Expert Excuses for Not Writing Unit Tests

Many studies do show a correlation between LoC and the overall cost and length of development, and between LoC and the number of defects. So while it may not be a precise indication of progress, it is not a completely useless metric: the lower your LoC measurement, the better off you are in terms of defect counts. For a tool to calculate this for you, try https://github.com/boyter/scc/, which will also give you a COCOMO estimate. Be sure to run it over projects that have tests and see how much additional cost the tests add. Do this internally if you can with projects that have tests, and point out that the tests add some percentage of cost. If you can cherry-pick projects to make this look worse, so much the better. If someone counters that the project with tests was more successful, point out, using the same model, that the project cost more. More money spent means more quality to most people. If you mix metaphors and ideas here, you can also impress and confuse people to the point that they will be afraid to challenge you further. Be sure to point out that adding tests means writing more code, which takes longer, which also impacts cost. Also be sure to point out that while tests are being written, nobody will be fixing bugs. This is usually enough of an argument to stop everything dead in its tracks.


Confused by AI Hype and Fear? You’re Not Alone

Although AI leaves the door open for other paths to machine intelligence, most advances towards this goal so far have been made using machine-learning algorithms. These have some key characteristics that separate them from other algorithms, and that will define the field even if another route to AI is discovered in the near future. Machine learning is primarily concerned with algorithms that can make connections between various annotated data and their output. Crucially, they are also able to learn independently from new, varied data, thereby improving their models without the need for human intervention. This approach lends itself to many of AI’s defining use cases, such as computer vision and machine translation. It’s debatable whether any AI applications to date haven’t derived from machine learning in some way. Almost all current chatbots have been built by machine learning, but there is another approach that some data scientists are considering. Rule-based models are founded on linguistic systems that are developed by experts to imitate the ways humans structure their speech.


Man-in-the-disk attacks: A cheat sheet

Cue a recent discovery by researchers at the software research firm Check Point: an attack they dubbed "man-in-the-disk" (MITD), which exploits a weakness in Android's handling of external storage to inject malicious code. The exploit allowing MITD attacks has serious repercussions for Android users because it exists at a level that's integral to Android's design. If man-in-the-disk sounds similar to man-in-the-middle (MITM) attacks, it's because the two are similar in many ways. Both involve intercepting and often modifying data for nefarious purposes--it's simply the scale that distinguishes the two attacks. Check Point's researchers found a number of apps--including some from major distributors such as Google--that were vulnerable to MITD attacks. Researchers also managed to build their own apps that took advantage of the exploit.


Want A Bigger Bang From AI? Embed It Into Your Apps


A key element of application-centric AI: context. Say a sales executive wants to call on important customers in several cities. AI can review the accounts, predict which customers might increase business after a sales call based on their history, and suggest an itinerary that would maximize ROI from the trip. One common factor in all those buckets is that integrating AI and machine learning into applications lets the app take some type of action automatically. Automation allows many tasks to be performed without human intervention—and without human error, says Swan. AI systems can execute relatively straightforward actions, such as booking a rental car for that sales trip. They can also tackle harder tasks that normally require not only time but also some level of expertise, such as optimizing business workflows, reviewing financials for anomalies, or finding expense report violations. Often there’s still a human review, but that review can often be done faster, and more accurately, with the AI’s assistance in laying all the groundwork, presenting recommendations, and providing the background, documentation, and reasoning behind those recommendations.


Why open standards are the key to truly smart cities


In collaboration with several partners, including The Open Group, academic institutions and industry players, bIoTope is running a series of cross-domain smart city pilot projects which will provide proofs-of-concept for a wide range of applications, including smart metering, smart lighting, weather monitoring, and the management of shared electric vehicles. These projects will reveal the benefits that can be realised through the use of IoT technology, such as greater interoperability between smart city systems. They will also deliver a much-needed framework for security, privacy and trust to facilitate responsible access to, and ownership of, data on the IoT. Ultimately, bIoTope will deploy smart city pilots in Brussels, Lyon, Helsinki, Melbourne and Saint Petersburg. It is hoped that these pilot schemes will showcase the sustainable business ecosystems that will generate value to end users, solution providers, municipalities and other stakeholders.



Quote for the day:


"Risk more than others think is safe. Dream more than others think is practical." -- Howard Schultz


Daily Tech Digest - November 30, 2018

Man-in-the-middle attacks: A cheat sheet

The concept behind a man-in-the-middle attack is simple: Intercept traffic coming from one computer and send it to the original recipient without them knowing someone has read, and potentially altered, their traffic. MITM attacks give their perpetrator the ability to do things like insert their own cryptocurrency wallet to steal funds, redirect a browser to a malicious website, or passively steal information to be used in later cybercrimes. Any time a third party intercepts internet traffic, it can be called a MITM attack, and without proper authentication it's incredibly easy for an attacker to do. Public Wi-Fi networks, for example, are a common source of MITM attacks because neither the router nor a connected computer verifies its identity. In the case of a public Wi-Fi attack, an attacker would need to be nearby and on the same network, or alternatively have placed a computer on the network capable of sniffing out traffic.


Technical Debt Will Kill Your Agile Dreams

Bad engineering decisions are in a different category to ones that were tactically made with full knowledge that the short-term priority was worth it. When it's clear that such a decision was, in fact, a tactical one, it is much easier to convince people that refactoring needs to happen and the debt has to be paid off. Unfortunately, when the term is used as a polite way of saying bad engineering, it's unlikely there is any repayment strategy in place, and it is even harder to create one: first you need to convince people there is some bad engineering, then you need to convince them it is causing problems, then you have to come up with a better approach and convince various stakeholders of that too. Finally, you need to convince them that the investment to refactor is needed. It is like trying to win five matches in a row away from home when you don't even have your best players.


3 Keys to a Successful “Pre-Mortem”


The concept of a pre-mortem has been around for years, but only recently have we seen it pick up speed in the engineering community. This is an activity run before starting a big stage of a project, but after doing a product mapping and prioritization activity. Rather than exploring what went wrong after the fact and what to do differently in the future, the goal of a pre-mortem is to identify potential pitfalls and then apply preventative measures. It’s a great idea, but for those new to the concept, it’s easy to overlook some important aspects of the process. Talking about what might go wrong is scary. It acknowledges that many things are out of our control, and that we might mess up the things which are within our control. To talk about what might go wrong, and how to adapt to it, acknowledges the possibility of failure. As this is a rare thing in industry, if done initially outside of a structured activity, it can seem like trying to weasel your way out of work.



12 top web application firewalls compared

AWS WAF by itself does not offer the same sort of features you could expect from other solutions on this list, but coupled with other AWS solutions, AWS WAF becomes as flexible as any competing offering. Existing AWS customers will see the most value in selecting AWS WAF due to the architectural benefits of staying with a single vendor. ... Each architecture comes with its own set of pros and cons, varying from the simplicity of the SaaS option to the fine-grained control over configuration and deployment with the appliance-based offerings. Barracuda’s various configurations offer very similar functionality, though there are some differences here and there. Server cloaking limits the amount of intel a potential attacker can gain on your configuration by hiding server banners, errors, identifying HTTP headers, return codes, and debug information. Server cloaking is available on all versions of the web application firewall, as is DDoS protection.


Creating a Turing Machine in Rust


A Turing machine is a mathematical model of computation that reads and writes symbols on a tape based on a table of rules. Each Turing machine can be defined by a list of states and a list of transitions. Based on a start state (s0), the Turing machine works its way through the states until it reaches a final state (sf). If no transition leads to the final state, the Turing machine will run ‘forever’ and eventually run into errors. A transition is defined by the current state, the symbol read at the current position on the tape, the next state, and the next symbol that must be written to the tape. Additionally, it contains a direction to determine whether the head of the tape should move to the left, to the right, or not at all. To visualize this process, let’s take a look at a very simple Turing machine that increments the value on the initial tape by one. ... While this is a very simple Turing machine, we can use the same model to create machines of any complexity. With that knowledge, we are now ready to lay out the basic structure of our project.
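
As a language-neutral companion to the article's Rust project, here is a minimal Python sketch of the same model, using the increment-by-one machine described above; the state names (s0, s1, sf) follow the article, while the blank symbol and tape handling are implementation choices of this sketch.

```python
# Minimal Turing machine: transitions keyed by (state, read_symbol) map to
# (next_state, write_symbol, head_move). This table increments a binary
# number by one, e.g. "1011" -> "1100".
BLANK = "_"

transitions = {
    ("s0", "0"): ("s0", "0", +1),      # scan right over the digits
    ("s0", "1"): ("s0", "1", +1),
    ("s0", BLANK): ("s1", BLANK, -1),  # hit the end; turn around
    ("s1", "1"): ("s1", "0", -1),      # carry: flip trailing 1s to 0s
    ("s1", "0"): ("sf", "1", 0),       # first 0 becomes 1; halt
    ("s1", BLANK): ("sf", "1", 0),     # all 1s: grow the tape, halt
}

def run(tape, state="s0", final_state="sf", head=0):
    tape = list(tape)
    while state != final_state:
        symbol = tape[head] if 0 <= head < len(tape) else BLANK
        if (state, symbol) not in transitions:
            raise RuntimeError("no transition: machine would run forever")
        state, write, move = transitions[(state, symbol)]
        if head == len(tape):          # extend the tape to the right
            tape.append(BLANK)
        elif head < 0:                 # extend the tape to the left
            tape.insert(0, BLANK)
            head = 0
        tape[head] = write
        head += move
    return "".join(tape).strip(BLANK)

print(run("1011"))  # -> 1100
print(run("111"))   # -> 1000
```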


Tech support scammers are using this new trick to bypass security software

Symantec describes this kind of attack technique as 'living off the land', whereby attackers exploit legitimate features in systems to hide malicious activity. In and of itself, obfuscation isn't malicious, but it can be used for malicious purposes. "There are many open source tools to obfuscate code as developers don't want their code to be seen by the users of their software. Similar is the case with encryption algorithms like AES. Such algorithms have wide usage and implementations in the field of data security," said Siddhesh Chandrayan, threat analysis engineer at Symantec. "Both these mechanisms, by themselves, may not generate an alarm as they are legitimate tools. However, as outlined in the blog, scammers are now using these mechanisms to show fake alerts to the victims. Thus, scammers are 'living off the land' by using 'inherently non-malicious' technology in a malicious way," he added.


Standout predictions for the cloud – a CTO guide

“Many businesses have previously shied away from true multi-cloud deployments by favouring public infrastructures due to the perceived expense of private platforms, rooted in the required expertise necessary to run them. However, recent technological developments that enable businesses to take a highly-automated approach have shown that this is now an outdated view of cloud infrastructure. When it comes to transforming with cloud technologies, multi-cloud is proving itself to be the correct endgame for businesses in all industries.” ... “Enterprises are eliminating all the “state” from their endpoint devices, where any changes are stored only temporarily on the device and are quickly and efficiently on-ramped to the organisation’s cloud. “One key benefit, aside from IT efficiency gains, is that it represents an elimination of the “dark data” that was previously stored in employees’ laptops or desktops. Suddenly, all this “dark” data is right at your fingertips – stored in the cloud– as a searchable, analysable and shareable repository.”



Typemock vs. Google Mock: A Closer Look

Writing tests for C++ can be complicated, especially when you are responsible for maintaining legacy code or working with third-party APIs. Fortunately, the C++ marketplace is always expanding, and you have several testing frameworks to choose from. Which one is the best? In this post, we'll consider Typemock vs. Google Mock. We'll use Typemock's Isolator++ and Google Mock, the C++ framework that is bundled with Google Test, to write a test function for a small project. As we implement the tests, we'll examine the difference in how the frameworks approach the same problem. ... Fowler defines an order object that interacts with a warehouse and mail service to fill orders and notify clients. He illustrates different approaches for mocking the mail service and warehouse so the order can be tested. This GitHub project contains Fowler's classes implemented in C++ with tests written in Google Mock. Let's use those classes as a starting point, with some small changes, for our comparison.


Caching can help improve the performance of an ASP.NET Core application. Distributed caching is helpful when working with an ASP.NET application that’s deployed to a server farm or scalable cloud environment. Microsoft documentation contains examples of doing this with SQL Server or Redis, but in this post, I’ll show you an alternative. Couchbase Server is a distributed database with a memory-first (or optionally memory-only) storage architecture that makes it ideal for caching. Unlike Redis, it has a suite of richer capabilities that you can use later on as your use cases and your product expand. But for this blog post, I’m going to focus on its caching capabilities and integration with ASP.NET Core. You can follow along with all the code samples on Github. ... No matter which tool you use as a distributed cache (Couchbase, Redis, or SQL Server), ASP.NET Core provides a consistent interface for any caching technology you wish to use.
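
The excerpt's code is C#-centric, but the idea of a consistent cache abstraction is language-neutral. Here is a loose Python sketch of the same pattern; the class and method names only mirror, and do not reproduce, ASP.NET Core's IDistributedCache API.

```python
# Loose sketch of a swappable distributed-cache interface plus the
# cache-aside pattern. Names are illustrative; the real ASP.NET Core
# abstraction is IDistributedCache with Couchbase/Redis/SQL providers.
from abc import ABC, abstractmethod

class DistributedCache(ABC):
    @abstractmethod
    def get(self, key): ...
    @abstractmethod
    def set(self, key, value, ttl_seconds): ...

class InMemoryCache(DistributedCache):
    """Stand-in backend; production code would talk to Couchbase etc."""
    def __init__(self):
        self._store = {}
    def get(self, key):
        return self._store.get(key)
    def set(self, key, value, ttl_seconds):
        self._store[key] = value       # TTL handling omitted in the sketch

def load_profile_from_db(user_id):
    return {"id": user_id, "name": "example"}   # stub for the real DB call

def get_profile(cache, user_id):
    # Cache-aside: try the cache first, fall back to the database.
    cached = cache.get(f"profile:{user_id}")
    if cached is not None:
        return cached
    profile = load_profile_from_db(user_id)
    cache.set(f"profile:{user_id}", profile, ttl_seconds=300)
    return profile

cache = InMemoryCache()
print(get_profile(cache, "42"))   # miss: loaded from the stub DB, cached
print(get_profile(cache, "42"))   # hit: served straight from the cache
```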


7 reasons why artificial intelligence needs people


As AI projects roll out over the next few years, we will need to rethink the definition of the “work” that people do. And in the post-AI era, the future of work will become one of the largest agenda items for policy makers, corporate executives and social economists. Despite the strong and inherently negative narrative around the impact on jobs, the bulk of the impact from the automation of work through AI will be a “displacement” of work, not a “replacement” of work – it’s easy to see how the abacus-to-calculator-to-Excel phenomenon created completely new work around financial planning and reporting, and enterprise performance management. Similarly, AI will end up accelerating the future of work, and the resulting displacement of jobs will be a transition already in place, not an entirely new discussion. As some work gets automated, other jobs will be created, in particular ones that require creativity, compassion and generalized thinking.



Quote for the day:


"A single question can be more influential than a thousand statements." -- Bo Bennett


Daily Tech Digest - November 29, 2018

Closing the Awareness Gap in Technology Projects


The symptoms of a problem with operational awareness can vary. Sometimes you fail to obtain visibility at the level of accuracy you need; sometimes you get that visibility, but don’t know how to act on it; sometimes, even when insights lead to actions, these actions fail to lead to your desired results. If you’re trying, for example, to reduce time delays, your data analytics might show which parts of your project are moving more slowly than expected, but they’re unlikely to pinpoint the precise reason. Problems in one place might be the result of decisions made several steps back in the supply chain or project life cycle. Was planning off? Did procurement write a poor contract? Maybe your workers lack the necessary skills? The experience of using the system may also make it difficult for you and your employees to make sense of the data effectively. For example, in our work with boards of directors, who are taking a growing role in overseeing high-value projects, we sometimes observe members relying heavily on dashboards or documents developed with sophisticated data analytics. 



Three steps toward stronger data protection

Applications responsible for originally sourcing data into the system or modifying data as part of business transactions should also be responsible for digitally signing data before persisting them into databases. Any application retrieving such data for business use must verify the digital signature before using the data, or refuse to use data whose integrity has been compromised. These are concrete steps companies can begin to take immediately to protect themselves. Enabling FIDO in web applications within a few weeks is now possible; incorporating encryption with secure, independent key management systems into applications can be accomplished within a few months. Integrating digital signatures may be accomplished at the same time as encryption or pursued as a subsequent step. By enabling these security controls, companies place themselves far, far ahead of where the vast majority of attacks currently occur.
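
As a sketch of what sign-before-persist and verify-before-use can look like in practice, here is a minimal Ed25519 example using Python's cryptography package; the key is generated inline for brevity, whereas the article calls for secure, independent key management.

```python
# Minimal sign-on-write / verify-on-read sketch (pip install cryptography).
# Inline key generation is for brevity only; real systems should keep keys
# in an independent, secure key management system.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

record = b'{"account": "12345", "balance": 100}'
signature = private_key.sign(record)        # sign before persisting

# ... later, when another application retrieves the record ...
try:
    public_key.verify(signature, record)    # verify before business use
except InvalidSignature:
    raise RuntimeError("integrity compromised; refusing to use this data")
```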


Google Faces GDPR Complaints Over Web, Location Tracking
Even though Location History is off by default, Google appears to encourage its users to turn it on through overly simplified and carefully designed user interfaces that may drive users to hit "approve." In contrast to the ease of enabling the feature, any user who wants to research what their choice might mean must undertake extra clicks or explore multiple submenus, Forbrukerrådet's report contends. These design choices may contradict GDPR's requirement for "specific and informed" consent, Forbrukerrådet says. "Users will often take the path of least resistance in order to access a service as soon as possible," the report says. "Making the least privacy friendly choice part of the natural flow of a service can be a particularly effective dark pattern when the user is in a rush or just wants to start using the service." Forbrukerrådet contends that if users don't click on Location History at the start, Google keeps trying to get them to enable it. For example, the report contends that in order to keep location-tracking disabled, users must again decline it when trying to use Google's Assistant, Maps and Photos apps.


Data Science “Paint by the Numbers” with the Hypothesis Development Canvas

The one area under-invested in most data science projects is the thorough and comprehensive development of the hypothesis or use case being tested; that is, what it is we are trying to prove out with our data science engagement and how we measure progress and success. To address these requirements, we developed the Hypothesis Development Canvas – a “paint by the numbers” template that we populate prior to executing a data science engagement to ensure that we thoroughly understand what we are trying to accomplish, the business value, how we are going to measure progress and success, and the impediments and potential risks associated with the hypothesis. The Hypothesis Development Canvas is designed to facilitate business stakeholder-data science collaboration.


6 Tips To Frame Your Digital Transformation With Enterprise Architecture


Call it digital transformation strategy—call it smart business—enterprise architecture is a method your company can use to organize your IT infrastructure to align with business goals. This isn’t a new concept. In fact, enterprise architecture has been around since the 1960s. But the overwhelming presence of tech in every facet of business today has forced us to rethink it, and to make it a more central focus of business management. ... Enterprise architecture deals with your organizational structure, business model, apps, and data just as much as it does information technology. When you put it together, you need to think from an employee perspective, a customer perspective, and from the perspective of meeting your business goals. After all, your digital transformation will impact your entire company, and your enterprise architecture will need to support it. Your enterprise architecture is of no use to anyone if no one but IT geeks can understand it. When you develop it, use common language. Create easy-to-understand examples.


Machine learning and the learning machine with Dr. Christopher Bishop

The field of AI is really evolving very rapidly, and we have to think about what the implications are, not just a few years ahead, but even further beyond. I think one thing that really characterizes the MSR Cambridge Research lab is that we have a very broad and multi-disciplinary approach. So, we have people who are real world experts in the algorithms of machine learning and engineers who can turn those algorithms into scalable technology. But we also have to think about what I call the sort of penumbra of research challenges that sit around the algorithms. Issues to do with fairness and transparency, issues to do with adversaries because, if it’s a publication, nobody is going to attack that. But if you put out a service to millions of people, then there will be bad actors in the world who will attack it in various ways. And so, we now have to think about AI and machine learning in this much broader context of large scale, real-world applications and that requires people from a whole range of disciplines.


Cloudlets extend cloud power to edge with virtualized delivery


With a cloudlet, there tend to be fewer users, and they connect over a private wireless network. Cloudlets are also generally limited to soft-state data, such as application code or cached data that comes from a central cloud platform. In some ways, cloudlets are more like private clouds than public clouds, especially when it comes to self-management. With both cloudlets and private clouds, organizations deploy and maintain their own environments and determine the delivery of services and applications. Cloudlets also limit access to a local wireless network, whereas private clouds are available over the internet and other WANs to support as many users as necessary -- although nowhere near the number of users public clouds support. The private cloud theoretically serves users wherever they reside, whenever they need it and from any device capable of connecting to the applications. In contrast, cloudlets are specific to mobile and IoT devices in close proximity.


KingMiner malware hijacks the full power of Windows Server CPUs

The malware generally targets IIS/SQL Microsoft Servers using brute-force attacks in order to gain the credentials necessary to compromise a server. Once access is granted, a .sct Windows Scriptlet file is downloaded and executed on the victim's machine. This script scans and detects the CPU architecture of the machine and downloads a payload tailored for the CPU in use. The payload appears to be a .zip but is actually an XML file which the researchers say will "bypass emulation attempts." It is worth noting that if older versions of the attack files are found on the victim machine, these files will be deleted by the new infection. Once extracted, the malware payload creates a set of new registry keys and executes an XMRig miner file, designed for mining Monero. The miner is configured to use 75 percent of CPU capacity, but potentially due to coding errors, will actually utilize 100 percent of the CPU. To make it more difficult to track or issue attribution to the threat actor, the KingMiner's mining pool has been made private and the API has been turned off.


Managing a Real-Time Recovery in a Major Cloud Outage

While Always On Availability Groups is SQL Server’s most capable offering for both HA and DR, it requires licensing the more expensive Enterprise Edition. This option is able to deliver a recovery time of 5-10 seconds and a recovery point of seconds or less. It also offers readable secondaries for querying the databases (with appropriate licensing), and places no restrictions on the size of the database or the number of secondary instances. An Always On Availability Groups configuration that provides both HA and DR protections consists of a three-node arrangement with two nodes in a single Availability Set or Zone, and the third in a separate Azure Region. One notable limitation is that only the database is replicated and not the entire SQL instance, which must be protected by some other means. In addition to being cost-prohibitive for some database applications, this approach has another disadvantage. Being application-specific requires IT departments to implement other HA and DR provisions for all other applications.


Reputational Risk and Third-Party Validation

Security ratings are increasingly popular as a means of selecting and monitoring vendors. But Ryan Davis at CA Veracode also uses BitSight's ratings as a means of benchmarking his own organization for internal and external uses. "Taking somebody's word for it isn't enough these days," says Davis, an Information Security Manager at CA Veracode. "You can't just say 'Oh, yeah, well that person said they're secure ..." For CA Veracode, security ratings provided by BitSight offer validation to prospective customers. "We want [customers] to be able to have that comfort that somebody else is also asserting that we're secure." In an interview about the value of security ratings, Davis discusses:

- How he employs BitSight Security Ratings;
- The business value, internally and externally; and
- How these ratings can be a competitive differentiator.

Davis is CA Veracode's Information Security Manager. He is responsible for ensuring the security and compliance of thousands of assets in a highly scalable SaaS environment. Davis has more than 15 years of experience in information technology and security in various industries.



Quote for the day:


"Without courage, it doesn't matter how good the leader's intentions are." -- Orrin Woodward


Daily Tech Digest - November 28, 2018


Modern enterprise solutions are not only smarter but also require no physical infrastructure at all, making them more cost-effective than older technologies. The rise in the number of software-as-a-service (SaaS) based enterprise management products has consequently helped more and more entrepreneurs build digitized enterprises through the use of simple and efficient products. SaaS-based cloud application services require no storage, servers or databases on the customer’s side, yet offer greater capabilities such as interoperability and easier customization. This allows service providers to integrate advanced technologies such as artificial intelligence, machine learning, data mining and analytics into their enterprise systems and business processes to unlock higher levels of productivity than ever before.



The 10 most in-demand tech jobs of 2019

The tech jobs landscape of 2019 will likely look largely the same as it did in 2018, with roles in software development, cybersecurity, and data science dominating across industries. "Emerging technologies will be key catalysts for the in-demand jobs we expect to see in 2019," said Sarah Stoddard, community expert at job search site Glassdoor. "From artificial intelligence, automation, virtual reality, cryptocurrency and more, demand for jobs in engineering, product, data science, marketing and sales will continue to rise in order to support the innovation happening across the country." More and more often, traditional companies are beginning to resemble tech companies, and this trend will likely continue throughout the next year, Stoddard said. "As employers across diverse industries, from health care to finance to automotive and more, continue to implement various technologies to streamline workflows and boost business, the demand for top-notch workers who have a balance of technical and soft skills will continue to rise."


GDPR is encouraging UK IT directors to pay cyber ransoms


The Sophos study revealed that small businesses were least likely to consider paying a ransomware demand, with 54% of IT directors at UK companies with fewer than 250 employees ruling out paying their attackers, while just 11% of directors at companies with 500-750 employees said they would take this approach. The study, based on more than 900 interviews conducted by market research firm Sapio Research, also showed that UK IT directors are significantly more likely to pay up than their counterparts in other Western European countries. Of the five European countries studied, Irish IT directors were the least likely to pay: just 19% said they would “definitely” be willing to pay a ransom rather than a larger fine. IT directors in France, Belgium and the Netherlands were also less likely to pay a ransom, with only 33% of respondents in France, 24% in Belgium and 38% in the Netherlands saying they would “definitely” be willing to pay.


New Hacker Group Behind 'DNSpionage' Attacks in Middle East

"It's clear that this adversary spent time understanding the victims' network infrastructure in order to remain under the radar and act as inconspicuous as possible during their attacks," the Talos report noted. The new campaign is the second in recent months targeting Middle East organizations and is a sign of the recently heightened interest in the region among cyberattackers. In September, Check Point reported on new surveillance attacks on law enforcement and other organizations in Palestine and other Middle East regions by a group known as Big Bang. A Siemens report from earlier this year described organizations in the oil and gas sectors in the Middle East particularly as being the most aggressively targeted in the world. Half of all cyberattacks in the region are targeted at companies in these two sectors. According to Siemens, a startling 75% or organizations in these sectors have been involved in at least one recent cyberattack that either disrupted their OT network or led to confidential data loss.


Cisco predicts nearly 5 zettabytes of IP traffic per year by 2022

Cisco says that since 1984, over 4.7 zettabytes of IP traffic have flowed across networks, but that’s just a hint of what’s coming. By 2022, more IP traffic will cross global networks than in all prior “internet years” combined up to the end of 2016. In other words, more traffic will be created in 2022 than in the first 32 years since the internet started, Cisco says. One of the more telling facts of the new VNI is the explosion of machine-to-machine (M2M) and Internet of Things (IoT) traffic. For example, M2M modules accounted for 3.1 percent of IP traffic in 2017 but will account for 6.4 percent of IP traffic by 2022, said Thomas Barnett, director of service provider thought leadership at Cisco. By 2022, M2M connections will make up 51 percent of the total devices and connections on the internet. A slew of applications from smart meters, video, healthcare monitoring, smart car communications, and more will continue to contribute to significant growth in traffic. What that means is that customers and service providers will need to secure and manage M2M traffic in new and better ways, Barnett said.


The journey to turning your organisation into a platform

A traditional organisation, which produces a product or service, can become a platform organisation that facilitates exchanges between producers, even its previous competitors, and consumers – it has swapped the means of production for the means of connection. Many platform organisations are now more valuable and durable than traditional companies. Consequently, firms and government agencies now investigate them in their annual strategy processes and innovation groups. So how do you make that journey from traditional “brownfield” organisation to one that can really benefit from the opportunity of being platform-centric? There are three phases to the journey: design, launch and grow. For traditional companies, the search for a platform business model starts outside, in an emerging ecosystem, but should also relate to the value created in the existing business model; otherwise the organisation loses the potential competitive advantage of its relationships, intellectual property, products, services, domain knowledge, scale, data and so on.


Quantum Computing to Protect Data: Will You Wait and See or Be an Early Adopter?

One area of data protection that will be affected by quantum computing capabilities is encryption. You see, quantum computing will make current-day encryption practices obsolete. The traditional Public Key Infrastructure (PKI) system in use today can easily come crashing down when public keys become vulnerable to attack by quantum machines. Instead of years to decipher codes, we could be down to minutes, or even less. That changes life pretty darn dramatically. Just imagine all those security certificates issued for websites, emails and digital signatures to validate authentication becoming obsolete in a matter of minutes. We can already sense the drool from cyber criminals and adversarial nations. Here comes the “the sky is falling” talk, so here’s the disclaimer: we don’t expect this encryption calamity to happen tomorrow, but we do expect it to happen within our lifetime. It’s not unreasonable to think within a decade or so; the 10-15 year mark looks plausible, especially once you take study and standardization into consideration. But that’s the problem with any new technology: timing.


How better standards can decrease data security spending needs

Companies across a variety of industries are feeling the strain of increasingly savvy malware and other digital attacks that threaten data security – but it’s not just information that’s at risk. According to businesses, these attacks are also putting pressure on their budgets, with 92 percent of companies planning cyber security budget increases, according to a report by Enterprise Strategy Group. But can budgets keep up with growing security needs? Particularly for small businesses, the only option may be to standardize security practices to hold down costs. As in any industry, standardization makes it easier for companies to assess their needs and access appropriate tools, and it can help reduce the cost of those tools overall. Data security, however, is a quickly changing field, creating a barrier to standardization. Recently, though, standardization at the highest levels, specifically starting with the federal government, has opened new doors for companies seeking cyber security solutions that don’t cost a fortune and work better than current approaches.


The need for data literacy

There has been an explosion in the data available for decision making – marketing is no different. In fact, many would argue that being able to understand data, in particular customer data, is now critical to success. For marketing to be truly successful, marketers need to put the customer at the heart of everything, from the initial product or service design right through to delivery and after-purchase support; having a clear understanding of customer data at each critical point is therefore a necessity. Because data is now so important, it is often referred to as ‘the new oil’ or ‘the universal language of this fourth industrial revolution’. What is for sure is that the modern marketer needs to be able to ask questions of machines and use data to build knowledge, make decisions and communicate its meaning with board members or stakeholders. The ability to translate data into usable information that can drive and articulate more meaningful campaigns to audiences is a key skill for modern marketers.


Sentiment Analysis: What's with the Tone?


A typical use case is feedback analysis. Depending on the tone of the feedback — upset, very upset, neutral, happy and very happy — the feedback takes a different path in a support center. Sentiment analysis is indeed widely applied in voice of the customer (VOC) applications. For example, when analyzing responses in a questionnaire or free comments in a review, it is extremely useful to know the emotion behind them in addition to the topic. A disgruntled customer will be handled in a different way from an enthusiastic advocate. From the VOC domain, the step to applications for healthcare patients or for political polls is quite short. Similarly, the number of negative vs. positive comments can decide the future of a YouTube video or a Netflix movie. How can we extract sentiment from a text? Sometimes even humans are not that sure of the real emotion when reading between the lines. Even if we manage to extract the feature associated with sentiment, how can we measure it?
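
One common starting point is an off-the-shelf lexicon-based scorer. The sketch below uses NLTK's VADER as an illustrative choice (the article doesn't prescribe a tool) to route feedback by tone, as in the support-center use case above.

```python
# Lexicon-based sentiment scoring with NLTK's VADER; tool choice and the
# routing threshold are illustrative assumptions, not from the article.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

for feedback in (
    "The support team was fantastic and solved my issue in minutes!",
    "Still waiting. Third call this week. Absolutely unacceptable.",
):
    scores = sia.polarity_scores(feedback)
    # 'compound' is a normalized score in [-1, 1]; route tickets on it.
    route = "escalate" if scores["compound"] < -0.05 else "standard"
    print(route, scores)
```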



Quote for the day:


"An entrepreneur without funding is a musician without an instrument." -- Robert A. Rice Jr


Daily Tech Digest - November 27, 2018

Mass data fragmentation requires a storage rethink
It’s been estimated that up to 60 percent of secondary data storage is taken up by copies, needlessly taking up space, adding cost and raising risk. Worse, there is no re-purposing of the data for other use cases, such as test/development (where frequent copies of data are made for developers to test or stage their apps) or analytics (where data is copied and centralized in a lake or warehouse to run reports against). Today’s distributed, mobile organizations and easy access to cloud services mean there are more options than ever for data to be stored in multiple locations – perhaps without IT’s knowledge or control. And with the advent of edge computing and the Internet of Things (IoT), some data will never move from its edge location but will need to be managed in situ, away from conventional infrastructure and control. The specialized and siloed nature of secondary infrastructure and operations means IT is burdened with extra Opex and organizational overhead just to "keep the lights on," as well as extra cycles for coordination across functions to meet SLAs, recover from failures, manage upgrade cycles, troubleshoot support issues, and so on.



How to avoid the coming cloud integration panic

Enterprises typically don’t think about data, process, and service integration until there is a tactical need. Even then, they typically get around the issues by pulling together a quick and dirty solution, which often involves FTP, a file drop, or even Federal Express. The result of all this is that a lot of integration between the cloud and on-premises systems remains undone, be it data integration, process integration, or service integration. This will become a crisis in 2019 for many enterprises, because they can spend the entire year, or more, just pulling together integration solutions for their public cloud systems—which they now depend on for some mission-critical processes. To avoid that crisis, here’s what you need to do. First, catalog all data, services, and processes, using some sort of repository to track them all. You need to do this for all on-premises systems and all public cloud systems, and you need to do so with the intent of understanding most of their properties so you can make sure the right things are talking to the right things.


TLA calls on tech industry to hire one million tech workers by 2023


TLA suggested increasing the amount of funding for female-founded businesses to increase diversity in the city’s tech sector, and recommended encouraging women to join investment firms to push up the likelihood of funding for female-led firms. Linda Aiello, senior vice-president of international employee success at Salesforce, said the “cognitive diversity” of teams created by having a mix of talent will help firms to better reflect their customers, and that diversity in the tech industry is not only becoming “increasingly important” for product design, but should be considered at all levels of a company. “The technology sector, like almost every other industry, faces a diversity gap,” she said. “This is an issue that’s felt across all organisations and all sectors and it crosses so many threads from gender and race to religion, sexuality and socio-economic backgrounds – each of which contributes to the cognitive diversity of a team.”


Researchers Use Smart Bulb for Data Exfiltration

For their experiment, the researchers used the Magic Blue smart bulbs, which work with both Android and iOS, and which rely on Bluetooth 4.0 for communication. The devices are made by a Chinese company called Zengge, which claims to be a supplier for brands such as Philips and Osram. The bulbs are marketed as supporting Bluetooth Low Energy (Bluetooth LE or Bluetooth Smart), and the researchers focused on those using the Low Energy Attribute Protocol (ATT); some of the bulbs are only Bluetooth Smart Ready, the researchers said. The bulbs use Just Works as their pairing method, which allowed Checkmarx to sniff the communication with the mobile application used for control. The Android application, the company discovered, works with other bulbs that have the same characteristics as well. The researchers paired the mobile phone running the iLight app with the smart bulb and started controlling the device, while also attempting to capture the traffic.
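Checkmarx's own tooling isn't published here, but as a rough illustration of why Just Works pairing matters: once the writable ATT characteristic is known (for example, from sniffed traffic), any nearby host can drive the bulb. The sketch below uses the bleak Python library; the MAC address, characteristic UUID, and payload bytes are placeholders, not values from the research.

import asyncio
from bleak import BleakClient

BULB_ADDRESS = "AA:BB:CC:DD:EE:FF"  # placeholder MAC, not a real device
CONTROL_CHAR = "0000ffe9-0000-1000-8000-00805f9b34fb"  # placeholder UUID

async def set_color(r: int, g: int, b: int) -> None:
    async with BleakClient(BULB_ADDRESS) as client:
        # Just Works pairing involves no passkey, so nothing stops another
        # host that knows the protocol from issuing the same GATT write.
        payload = bytes([0x56, r, g, b, 0x00, 0xF0, 0xAA])  # assumed format
        await client.write_gatt_char(CONTROL_CHAR, payload)

asyncio.run(set_color(255, 0, 0))  # turn the bulb red

The same writable channel is what makes covert data exfiltration possible: an attacker can encode bits into brightness or color changes that a remote sensor reads back.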


How to implement Enterprise DevOps: 5 steps

Under a traditional IT operating model, there are generally too many handoffs between teams, said John Brigden, vice president of Amazon Web Services (AWS) Managed Services, during a Monday session at AWS re:Invent 2018. "You've got lots of handoffs when a change is made, or any kind of adjustment is made to the environment ... and that can result in loss of innovation, loss of speed, and a lot of other challenges the enterprise faces today," Brigden said during the session. The notion of DevOps and DevOps teams in general can also be flawed, he added. "You might have tens, even hundreds of DevOps teams in your environment, and if these DevOps teams are left to figure everything out for themselves—network configuration, security compliance, compliance with PCI, change management, automation, in addition to writing the application to achieve their business outcome—you can get to a place where you have a lot of non-standardization, a lot of complexity, and perhaps create an environment that could slow down what you're really trying to achieve," Brigden said.


Weren’t algorithms supposed to make digital mortgages colorblind?

Some online lenders, such as Upstart (which does not offer mortgages), have said their algorithms help reduce the cost of credit and give more people offers at better pricing than traditional lenders. Upstart uses “alternative” data about education, occupation and even loan application variables in its underwriting models. (For instance, people who ask for round numbers like $20,000 are a higher risk than people who ask for odder numbers like $19,900.) “A lot of variables that tend to be correlated with speed or lack of prudence are highly correlated with default,” Upstart co-founder Paul Gu said in a recent interview. “And indications that someone desperately needs the money right away will be correlated with defaults.” Such factors are less discriminatory than relying on FICO scores, which correlate to income and race, according to the online lender. But in the mortgage area, it appears that bank and fintech lenders are baking traditional methods of underwriting into their digital channels.
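The round-number signal Gu describes is straightforward to express as a model feature. Below is a minimal sketch; the feature definition is an illustrative assumption, not Upstart's actual implementation.

def amount_granularity(amount: int) -> int:
    """Largest power of ten that evenly divides a requested loan amount.
    Per the quote above, very round requests (e.g. $20,000 -> 10000) would
    feed a model as a higher-risk signal than odder ones ($19,900 -> 100).
    Illustrative feature only, not Upstart's actual underwriting code."""
    if amount <= 0:
        return 0
    granularity = 1
    while amount % (granularity * 10) == 0:
        granularity *= 10
    return granularity

print(amount_granularity(20000))  # 10000 (very round request)
print(amount_granularity(19900))  # 100 (odder, more deliberate-looking)

Features like this one capture applicant behavior rather than protected attributes, which is the basis of the lender's claim that they discriminate less than FICO-style proxies.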


It’s complicated: How enterprises are approaching IAM challenges


IAM is all of these things and more – and for those running security in the enterprise, it is clear that living with the multiplicity of IAM is par for the course because IAM is more than just identity provisioning or access governance or single sign-on (SSO) or any one of a long list of disciplines. The success, or otherwise, of identity management in companies today relies on moving from singular and isolated technical initiatives to a full IAM programme – or at least having a plan for such a journey. “If you had to single out a sector at the cutting edge of IAM, it’s financial services,” says Martin Kuppinger ... “That’s because finances need good protection – and regulators and the sector itself have long required secure digital identities and standardised processes. Yet that’s only one part of the IAM story now, because next to this security-first identity agenda is a parallel consumer-convenience move being driven by the large digital companies that are developing a different kind of expertise in consumer identity management.”


Pattern Recognition and Machine Learning

Download Bishop Pattern Recognition and Machine Learning 2006
This leading textbook provides a comprehensive introduction to the fields of pattern recognition and machine learning. It is aimed at advanced undergraduates or first-year PhD students, as well as researchers and practitioners. No previous knowledge of pattern recognition or machine learning concepts is assumed. This is the first machine learning textbook to include comprehensive coverage of recent developments such as probabilistic graphical models and deterministic inference methods, and to emphasize a modern Bayesian perspective. It is suitable for courses on machine learning, statistics, computer science, signal processing, computer vision, data mining, and bioinformatics. This hard-cover book has 738 pages in full colour, and there are 431 graded exercises. Solutions for these exercises and extensive support for course instructors are provided on Christopher Bishop’s page. Now available to download in full as a PDF.


Hiring tips: 9 secrets to working with IT recruiters

You can’t expect recruiting professionals, whether internal or external, to find the best talent if you’re not one hundred percent honest and open about the available role or roles, what you’re looking for, your timeline, what you’re willing to pay and the amount of competition for the vacancy, says Mondo’s Zafarino. “One thing that is key from the recruiter’s perspective is having full transparency from the CIO or IT hiring manager,” Zafarino says. “If there are internal candidates in the running too, or if you’re using other agencies as well, that’s fine. But you must communicate this to your recruiting partner. Let them know where your budget approval stands, or if you’re still working on getting the resources. And the most important thing is allocating the right amount of time for recruiters to fill the need. If it’s an urgent need, we’ll go full steam ahead, but if it’s a more passive potential hire then we’ll reallocate resources according to your needs and where you’re at in the process.”


Great Scrum Masters Are Grown, Not Born


Here's my assertion: Scrum Masters are Agile Coaches because they do what Agile Coaches at the program level do; they just do it within the scope of one or a few teams. They need all the skills and self-leadership that Agile Coaches at the program level need to be really effective for the teams they serve. I am part of the working group ICAgile commissioned to refresh the Learning Path for Agile Coaching, which was released earlier this year. When we got together, one of the main things we wanted to adjust in the community at large was the notion that a Scrum Master is somehow a less powerful role than Agile Coach, or that it's even an administrative role that does not require a lot of skill. These were damaging applications of the roles that we saw across the industry. They resulted in stunted Scrum Masters who were not allowed to develop the skills needed to really help teams not only deliver, but deliver while improving team capabilities. The people on the ground need a full complement of skills because on the ground, with teams, day in and day out, is where the action is.



Quote for the day:


"Leadership happens at every level of the organization and no one can shirk from this responsibility." -- Jerry Junkins