Daily Tech Digest - December 03, 2018

Marriott data breach reactions
Marriott Hotels should have identified this breach through its cyber due diligence of Starwood when it acquired the company in 2016. As a result of buying a breach, it will face a number of challenges at board level around the levels of governance and diligence within the business. Had it performed a detailed compromise assessment as part of its due-diligence activity, the organisation's board would have been informed of the breach and been able to make a decision based on risk, or put other warranties in place. Since the compromise started in 2014, the breach doesn't fall under the remit of GDPR. However, the fallout would be incredibly severe under this regulation, and therefore any organisation looking to undergo an M&A deal now or in the future should learn from this example and ensure comprehensive cyber security and compromise assessments are carried out to inform its understanding of risk.


What's the best way to make the most of the cloud?

That ability to scale is critical to the organisation. Six years ago, the business had clear peaks in traffic -- the launch of its world-famous book of records every September and Guinness World Records Day in November. Today, GWR is less reliant on publishing and operates more like a digital consultancy and its traffic peaks are unpredictable. Howe gives an example. "On the first day we went live with the new AWS infrastructure, there was a press release for the largest unlimited wave surfed by a woman," he says. "It was huge news in the surfing community and within a few hours we'd received four times our normal daily web traffic. Yet we were able to meet that demand comfortably by just turning on the auto-scaling capability of the cloud." As well as scalability and flexibility, Howe says the cloud provides other benefits. "It allows us to be more dynamic as a team and to think more carefully about where we should focus our attention," he says. "It gives us better transparency in terms of costs, too."


How to buy SD-WAN technology: Key questions to consider when selecting a supplier

The first strategic choice is deciding what kind of partner you want to deploy and support your SD-WAN architecture. IT organizations can work directly with the leading SD-WAN technology providers and their channel partners, or purchase a managed SD-WAN service from a service provider such as AT&T, Verizon, CenturyLink, Comcast and many others. Most organizations will benefit from an experienced channel partner to integrate SD-WAN into their existing branch/WAN infrastructure, which may include routers, WAN optimization appliances, firewalls and other network security elements. Many organizations will want to outsource SD-WAN technology and related bandwidth decisions to a managed service provider. Organizations that plan to implement an internally developed (non-managed) SD-WAN solution need to examine several key issues for deployment. These include a review of their branch WAN/LAN architecture, WAN bandwidth requirements and providers, and, of course, selecting an SD-WAN technology.


Prepare for Takeoff: The Future of ERP

A truism dating back to the ancient Greeks notes that luck is when opportunity meets preparation. It's hard to argue that Plattner got lucky with his vision. Rather, it's a case of timing really being everything. After all, a complete ERP redesign should take roughly a decade. That timeline matters a great deal now in light of converging forces in the business world. On the one hand, global competition puts greater emphasis on operational efficiency. With margins tightening, companies need to optimize their processes, and especially, the customer experience. A real-time ERP can play a major role in accomplishing both of these objectives. Another major driver right now is regulation. There are numerous regulatory changes across several industries that are currently putting pressure on organizations to get much more visibility into issues like revenue recognition, privacy and process. In all of these scenarios, a real-time ERP can play a significant role in helping organizations not only stay compliant, but also reinvent critical business processes.


New forms of governance needed to safely and ethically unlock value of data


“Legally speaking, if we’re going to be setting up data trusts with massive amounts of data and serious risks in terms of data security and the implications on people’s privacy, questions about consent, we’re going to have to have data protection impact assessments,” she added. In terms of how decisions are made about data usage, the trusts could potentially help strike a balance between purely giving organisations control, which could encourage monopolistic behaviour and further entrench the power imbalance, and purely giving individuals control, which would require significant effort on their part to manage the vast amounts of data held on them. “I think the notion that an individual wants to literally manage large amounts of data about themselves is a strange sort of idea,” said Roger Taylor, chair of the Centre for Data Ethics and Innovation, which was set up by the DCMS in June 2018 to help create ethical frameworks for the use of emerging technologies.


Google Assistant and Smart Display get several new features for the holidays

The "Pretty Please" feature headlines Google's list. With it enabled, saying "Please" with your Assistant commands can now produce some unique responses from her, like "Thanks for asking so nicely." The idea pitched last summer is that you're teaching younger users to mind their manners when asking for things. And in case Google AI ends up taking over the world like something out of a sci-fi movie, you'll have banked some good faith that may save you from Martian slave pits and re-education camps. Pretty Please is enabled now in the app, and for Google's smart speakers and smart displays. There's no setting to enable or disable -- the Assistant just has additional responses when you ask nicely. Google's also added the long-awaited ability to create and manage lists with the Assistant, instead of having to use the Keep Notes app (Android, iOS) separately. The company says that it will be adding Keep Notes integration soon, as well as support for Any.do, Bring!, and Todoist.


From Warfare to Outsourced Software Development


Customers are not enemies and projects are not really born as conflicts, but in order to wage your “friendly” attack on your customer, you need to know enough about their territory and capacity. This foreknowledge is what we call Project Intelligence. A reconnaissance mission is needed here. Never take it as a spying mission; it's just a matter of ethical, deep observation of things as they are naturally exposed to us by the customer. But who are the agents who will do it for you? Just look around you. Sales and presales are the spearhead in winning the battle of a new contract. Before the win takes place, they spend a good deal of time on the customer's territory setting up connections, getting to know things and people, and striving to make contracting happen. In this way, they can be exposed to quite a good deal of information, not only on pure business grounds, but also on the business politics of the new front, its weaknesses and strengths.


Cloud investments should be to boost agility, not cut costs

Since cloud implementation, they have reached several new groups they wouldn’t have been able to reach beforehand, enabling better two-way communication and capturing all consumer interaction data. From a developer’s point of view, cloud adoption has been instrumental, as they are able to create new functionality on the fly and deploy it easily and economically to their users. This scenario is repeated in other industries and enterprises, such as Cepsa, the second largest petroleum and chemical company in Spain. Cepsa had already integrated new technologies into its operations, but it still needed to reinvent and streamline processes, something that it was finally able to accomplish by using iOS mobile apps built on an open cloud platform. Now, these apps allow service station workers to anticipate their own needs and place new supply orders with a single click. Moreover, direct sales representatives also benefit because they can manage every customer interaction on their smartphones, speeding up order processing, approval and fulfilment.


AWS Aims to Speed and Simplify Robot Development

RoboMaker’s cloud extensions are important because they enable robots to do far more than they could using only local resources. The cloud extensions are written as ROS packages, so developers familiar with ROS can easily embed them in their applications. Zhu said Amazon currently has five integrated services: Amazon Lex and Amazon Polly, which enable developers to add natural language conversation capabilities to their robots; the Amazon Kinesis data streaming service; the Amazon Rekognition service for facial recognition and object detection; and the Amazon CloudWatch service for near real-time live streaming of telemetry and log data for monitoring individual robots and fleets. RoboMaker also includes a simulation environment, which is necessary because not all developers working on robotic applications have access to the target device.


How the cybercrime and cyberwar landscape is constantly changing

For the average person, it's pretty unlikely that a state-backed hacker is going to come after you, unless you're a really high-value target. For the average person it's quite a rare kind of risk. Obviously if you are, I don't know, working in aerospace or biotech or robotics, one of those kinds of companies, then there's a reasonable chance that someone's going to try and hack your systems to steal your intellectual property or just cause trouble. In terms of the bigger risk, clearly down the line there's a lot of worry about cyber warfare -- that hackers could actually break into things like power systems or banks and cause chaos that way. That's clearly a huge risk, but the likelihood is very low. What's going to happen day to day is you're more likely to run into a scammer or maybe get ransomware on your PC or something like that. Those are the kinds of everyday risks, which are incredibly annoying and a real problem if suddenly your PC is encrypted and you can't get to your family photos or the work you're doing.



Quote for the day:


"Leadership matters more in times of uncertainty." -- Wayde Goodall


Daily Tech Digest - December 02, 2018

How Technology is Changing the Lending Landscape
For lenders, adoption of tech-enabled risk modelling techniques, for instance, removes limitations associated with manual credit assessment and directly translates to speedier disbursement of credit to qualifying applicants. Beyond enhancing internal processes, technology is enabling lenders to target potential applicants based on their Internet search and social media behaviour patterns, and helps expand sales pipelines and reach beyond target markets. ‘Pay-as-you-use’ tech offered by new-age fintech vendors greatly helps level the playing field for newer players against well-heeled larger players. The arrival of tech-enabled alternative financing platforms like ‘Peer to Peer’ and ‘New to Credit’ lending is also increasing choice for the borrower. In the pre-tech era, borrowing was like buying groceries from the only shop in your neighbourhood. You had to buy what was available. Lenders -- banks and non-banking finance companies -- offered a fixed set of loan options, and borrowers had no choice but to settle for whatever was available.


Getting your Fintech Ready for Investment with haysmacintyre

For many start-ups, an ‘angel’ investor will be the preferred choice. These investors can offer extensive experience and expertise, helping the business leaders through unfamiliar territory, whilst standing back from the day-to-day management of the business. Often referred to as “patient capital”, angel investors are generally less concerned with rapid returns, supporting the business throughout its growth. Alternatively, fintech companies may turn to venture capital (VC) investment. It is, however, worth considering that the VC investor will want to exit at some point down the line, many frequently departing within five years of Series A investment. It should also be kept in mind that they may want some control over the day-to-day operation of the business, and would possibly want a position on the board. Start-up businesses often receive investment that is particularly hands-off, where the investors pay little attention to day-to-day matters. As the scale of investment increases, businesses should prepare for this dynamic to change.


Redesigning the Office App Icons to Embrace a New World of Work


Today’s workforce includes five generations using Office on multiple platforms and devices and in environments spanning work, home, and on the go. We wanted a visual language that emotionally resonates across generations, works across platforms and devices, and echoes the kinetic nature of productivity today. Our design solution was to decouple the letter and the symbol in the icons, essentially creating two panels (one for the letter and one for the symbol) that we can pair or separate. This allows us to maintain familiarity while still emphasizing simplicity inside the app. Separating these into two panels also adds depth, which sparks opportunities in 3D contexts. Through this flexible system, we keep tradition alive while gently pushing the envelope. ... To reflect this in the icons, we removed a visual boundary: the traditional tool formatting. Whereas prior Office icons had a document outline for Microsoft Word and a spreadsheet outline for Excel, we now show lines of text for Word and individual cells for Excel.


3 Signs of a Good AI Model

The first step in understanding how to achieve XAI is to understand what a model is and how it works. Simply stated, a model is a set of transformations that convert raw data into information, most often by applying statistics and advanced mathematical constructs such as calculus and linear algebra. What makes AI models different from traditional data transformations is that the model is constructed by employing algorithms to expose patterns from historical data; those patterns form the basis for the mathematical transformation. Traditional data transformations are most often a set of directives and rules established and programmed by a developer to achieve a specific purpose. Because AI models learn from having more data, they can be regenerated periodically to sense and adjust to changes in the underlying behaviors associated with the transformation. One of the strengths of AI is that the process of creating a model can identify patterns that are not obvious and intuitive by looking at the data.
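To make the contrast concrete, here is a minimal sketch (with invented numbers) of the difference between a hand-programmed transformation and one whose parameters are learned from historical data by least-squares fitting:

```python
# Illustrative historical data: hours of product usage -> support tickets.
hours = [1.0, 2.0, 3.0, 4.0, 5.0]
tickets = [2.1, 3.9, 6.2, 8.0, 9.9]

# Traditional transformation: a fixed directive a developer programmed.
def rule_based(h):
    return 2.0 * h  # never changes unless someone reprograms it

# AI-style model: the slope and intercept are *learned* from the data, so
# regenerating the model with fresh data adjusts the transformation
# automatically, without reprogramming.
n = len(hours)
mean_x = sum(hours) / n
mean_y = sum(tickets) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(hours, tickets))
         / sum((x - mean_x) ** 2 for x in hours))
intercept = mean_y - slope * mean_x

def learned_model(h):
    return slope * h + intercept

print(round(slope, 2), round(intercept, 2))
```

Refitting with new historical data changes `slope` and `intercept` without touching the code, which is the "regenerated periodically" behaviour the excerpt describes.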


The future of cash in Canada

The Bank of Canada staff economists considered all of this in a discussion paper released this fall called "Is a Cashless Society Problematic?" The paper cites the consistent decline of cash payments in Canada over recent decades. It also mentions an analysis by Forex Bonuses, which declared Canada the top country in the world in embracing cashless technology. A very close second was Sweden, a country where the government is now studying how going cashless could affect the nation. The findings, referenced in the paper released by the Bank of Canada, focused on indicators such as the number of credit cards per person and the volume of cashless transactions. To save drivers time and to reduce traffic congestion, New York state switched to cashless toll booths at Grand Island, where millions of tourists travelling to Niagara Falls pass through every year. For users without a pass, the state mails the registered owner a bill. The complication is mailing bills to Canadian addresses attached to license plates: the state can't access that information, something it didn't fully consider when implementing the system.


Enterprises face these 3 challenges while adopting AI


Several barriers to AI adoption, ranging from analyzing disparate data to identifying the right AI use case to hiring the best talent, hold back companies from seizing AI opportunities. For quite some time now, there has been a lot of buzz around AI and its promise to disrupt industries altogether. From digital assistants to robotic process automation to self-driving cars, AI has offered cool and innovative applications, which have only been the subject of science fiction. Today, AI has reached a level of precision where it can understand human emotions too. The power of AI to make machines ‘smart’ and ‘intelligent’ has triggered a lot of industries to invest in AI projects. The decision of leveraging AI to aid digital transformation is pretty understandable. But, companies should first analyze the potential barriers to AI adoption so that they can enjoy successful AI implementation. While companies think of leveraging AI for transforming their existing workflows, they should keep in mind these potential hurdles, and plan their journey with AI accordingly.


IoT Trends to Watch for in 2019

Throughout 2018, the staggering growth of digital assistants, such as Amazon’s Alexa and Google Home, showed that smart devices are here to stay. While the concept of smart toasters has been a long-running joke, analysts forecast strong growth among consumer-facing IoT devices of all shapes and sizes. A bit of smart technology can simplify our home lives; automated vacuum cleaners have long been popular, but adding in some smart technology can make them even more useful. Smart technology is already making home security systems far more capable, and bringing smart technology into the kitchen can make it easier to save time while preparing meals. Businesses are certainly looking into investing in smart technology, and smart desks and smart walls are expected to become far more common in 2019. If there’s any perceived benefit of adding IoT technology to a consumer device, someone is likely to offer it for sale, even if the value added is dubious. Edge devices have become staples of typical IoT installations, as they allow for more efficient operations and better responsiveness.


Azure Service Fabric Mesh: A Platform for Building Mission Critical Microservices


A Service Fabric cluster provides you with a reliable and scalable cluster of VMs running the Service Fabric runtime, into which you deploy and manage your applications/services (containerized or non-containerized) via a highly available cluster endpoint. The Service Fabric runtime makes service placement decisions based on its integration with the underlying Azure infrastructure, making them reliable. When using Azure Service Fabric clusters, you have administrator access not only to your cluster, but also to the VMs that make up the cluster. You pick the VM SKUs to meet your needs, and you get to decide on the network security rules and the autoscale rules by which you want to scale the cluster. You can set up automatic upgrades of the Service Fabric runtime and the VM operating system. With this offering, you are only paying for the VMs, storage and networking resources you use; the Service Fabric runtime is effectively free. It is a great fit for customers/ISVs who want full control of the infrastructure.


Adding Object Detection with TensorFlow to a Robotics Project


My robot uses the Robot Operating System (ROS). This is a de facto standard for robot programming, and in this article we will integrate TensorFlow into a ROS package. I'll try and keep the details of the ROS code to a minimum, but if you wish to know more, may I suggest you visit the Robot Operating System site and read my articles on Rodney. Rodney is already capable of moving his head and looking around and greeting family members that he recognises. To do this we make use of the OpenCV face detection and recognition calls. We will use TensorFlow in a similar manner to detect objects around the home, like, for instance, a family pet. Eventually the robot will be capable of navigating around the home looking for a particular family member to deliver a message to. Likewise, imagine you are running late returning home and wish to check on the family dog. With the use of a web interface you could instruct the robot to locate the dog and show you a video feed of what it's doing. Now, in our house we like to say our dog is not spoilt, she is loved.


The Digital Twin Organization: Can Enterprise Architecture Help?

The Digital Twin of the Organization is a concept created by Gartner. Quite simply, it is predicated on using a digital representation of an organization (its business model, strategies, etc.) to better plan and execute a business transformation initiative. The whole idea behind the digital twin concept, and the reason why it is so useful, is that it offers a virtual model that can be analyzed and tweaked more easily than the real thing. The new insights and efficiencies you uncover this way can in turn be used to improve the organization. Model is the key word here. Models are massless, frictionless, virtually free, reusable, and – importantly – they are also the lifeblood of enterprise architecture. Thus, EA is by default positioned to play a key part in taking the Digital Twin of the Organization from concept to reality. We have been arguing the importance of a model-based approach to business change for quite some time on this blog; now it seems the future is starting to catch up. Let us have a more detailed look at how exactly EA helps, and offer some examples based on the BiZZdesign suite.



Quote for the day:


"To be able to lead others, a man must be willing to go forward alone." -- Harry Truman


Daily Tech Digest - December 01, 2018

Blockchain has been wildly mis-sold, but underneath it is a database with performance and scalability issues and a lot of baggage. Any claim made for blockchain could be made for databases, or simply publishing contractual or transactional data gathered in another form. Its adoption by non-technical advocates is faith-based, with vendors' and consultants' claims being taken at face value, as Eddie Hughes MP (Con, Walsall North) cheerfully confessed to the FT recently. "I'm just a Brummie bloke who kept hearing about blockchain, read a bit about it, and thought: this is interesting stuff. So I came up with this idea: blockchain for Bloxwich," said Hughes. As with every bubble, whether it's Tulip Mania or the Californian Gold Rush, most investors lose their shirts while a fortune is being made by associated services – the advisors and marketeers can bank their cash, even if there's no gold in the river.


Building Resilient Data Multiclouds


Resilience is risk mitigation that is engineered into all your IT assets. It’s the confidence that your infrastructure won’t fail you ever, especially in times of crisis. Resilience that’s baked into the True Private Cloud ensures that businesses can weather those once-in-a-lifetime “black swan,” “perfect storm,” and other disruption scenarios that can put them and their stakeholders out of business permanently. To evolve toward this resilience architecture, enterprises must take steps to ensure that their migration of data, analytics, and other IT infrastructures to the cloud environments are comprehensively resilient. The path to the True Private Cloud requires a keen focus on building unshakeable resilience into distributed data assets. Management information systems and enterprise data warehouse systems have historically been viewed as “second-class citizens” among IT infrastructure platforms. Occasional failures of these systems were viewed as tolerated events.


Global Financial Services Bullish On AI, The 'Disruptive Tech' Frontrunner


AI and its application through machine learning is being increasingly used to automate processes such as credit decision-making and customer interaction as well as help detect fraud, money laundering and even terrorist activity. Capital markets-focused organizations such as investment banks are the furthest down the road in the financial services industry in adopting new disruptive technologies, with a little over half (51%) saying that AI, ...  and just 17% among those in the private wealth industry. Stephanie Miller, Chief Executive Officer of Intertrust, commenting in the wake of the findings said: “With the hype surrounding disruptive technology in the financial sector it is easy to lose sight of reality. The findings from this study suggest that while the industry is positive towards new technology such as AI, blockchain and robotics, only a minority of firms are currently putting it to use and the speed of travel remains cautious.”


Geospatial Data Brings Value Across Industries

Though it may seem like a highly technical concept, most people use some type of geospatial data system every day because such programs are used to route Uber drivers, assess credit risk and lending rates based on zip code, and determine insurance rates by identifying homes at risk of flooding, earthquakes, and other natural disasters. Even kids use geospatial data to play games like Pokemon Go. Geospatial information is everywhere and, in a world where everyone is attached to a smartphone, we’re constantly connected to it. Put simply, geospatial data just means that the information set is tied to zip codes, addresses, or coordinates, among other possibilities. It’s a map or an address book, reinterpreted for a digital ecosystem. Though there are plenty of groups building geospatial data sets, one of the factors that has most contributed to this new digital world is the availability of open data sets.


Built for realtime: Big data messaging with Apache Kafka

Apache Kafka's architecture is very simple, which can result in better performance and throughput in some systems. Every topic in Kafka is like a simple log file. When a producer publishes a message, the Kafka server appends it to the end of the log file for its given topic. The server also assigns an offset, which is a number used to permanently identify each message. As the number of messages grows, the value of each offset increases; for example if the producer publishes three messages the first one might get an offset of 1, the second an offset of 2, and the third an offset of 3. When the Kafka consumer first starts, it will send a pull request to the server, asking to retrieve any messages for a particular topic with an offset value higher than 0. The server will check the log file for that topic and return the three new messages. The consumer will process the messages, then send a request for messages with an offset higher than 3, and so on.
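The append-and-pull cycle described above can be sketched in a few lines of Python (a toy illustration of the offset mechanism, not the real Kafka client API):

```python
# Toy model of a Kafka topic: an append-only log where each message gets a
# permanent, increasing offset, and consumers pull everything past an offset.
class TopicLog:
    def __init__(self):
        self.log = []  # behaves like the simple per-topic log file

    def publish(self, message):
        self.log.append(message)  # server appends to the end of the log
        return len(self.log)      # the offset permanently identifying it

    def pull(self, after_offset):
        # Return (offset, message) pairs with offset higher than after_offset.
        return list(enumerate(self.log[after_offset:], start=after_offset + 1))

topic = TopicLog()
for msg in ("m1", "m2", "m3"):
    topic.publish(msg)

# Consumer's first pull asks for anything with offset higher than 0.
batch = topic.pull(0)        # [(1, 'm1'), (2, 'm2'), (3, 'm3')]
last_offset = batch[-1][0]   # consumer remembers how far it has read

topic.publish("m4")
print(topic.pull(last_offset))   # [(4, 'm4')]
```

The key property this illustrates is that the consumer, not the server, tracks its position: re-pulling with an older offset simply replays the log from that point.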


Expert Excuses for Not Writing Unit Tests

Many studies do show a correlation between LoC and the overall cost and length of development, and between LoC and the number of defects. So while it may not be a precise indication of progress, it is not a completely useless metric. The lower your LoC measurement is, the better off you are in terms of defect counts. For a tool to calculate this for you, try https://github.com/boyter/scc/ which will also give you a COCOMO estimation. Be sure to run it over projects that have tests and see how much additional cost the tests add. Do this internally if you can with projects that have tests, and point out that the tests add some percentage of cost. If you can cherry-pick projects to make this look worse, so much the better for you. If someone counters that the project with tests was more successful, point out using the same model that the project cost more. More money spent means more quality to most people. If you mix metaphors and ideas here, you can also impress and confuse people to the point where they will be afraid to challenge you further. Be sure to point out that adding tests means writing more code, which takes longer, which also impacts cost. Also be sure to point out that while tests are being written, nobody will be fixing bugs. This is usually enough of an argument to stop everything dead in its tracks.
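For reference, the COCOMO estimate such tools produce is driven by a simple power law, so every line of test code feeds straight into the projected cost. A sketch of the basic (organic-mode) COCOMO effort formula, using its standard coefficients:

```python
def cocomo_basic_effort(loc, a=2.4, b=1.05):
    """Basic COCOMO, organic mode: effort in person-months from lines of code."""
    kloc = loc / 1000.0
    return a * kloc ** b

# Because b > 1, doubling LoC (say, by adding a test suite as large as the
# production code) slightly *more* than doubles the estimated effort.
code_only = cocomo_basic_effort(10_000)
with_tests = cocomo_basic_effort(20_000)
print(round(with_tests / code_only, 3))
```

Which is, of course, exactly the number the expert excuse-maker above wants to wave around, while omitting that the model knows nothing about defect costs avoided.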


Confused by AI Hype and Fear? You’re Not Alone

Although AI leaves the door open for other paths to machine intelligence, most advances towards this goal so far have been made using machine-learning algorithms. These have some key characteristics that separate them from other algorithms, and that will define the field if another route to AI is discovered in the near future. Machine learning is primarily concerned with algorithms that can make connections between various annotated data and their output. Crucially, they are also able to learn independently from new, varied output, thereby improving their models without the need for human intervention. This approach lends itself to many of AI’s defining use cases, such as computer vision and machine translation. It’s debatable whether any AI applications to date haven’t derived from machine learning in some way. Almost all current chatbots have been built by machine learning, but there is another approach that some data scientists are considering. Rule-based models are founded on linguistic systems that are developed by experts to imitate the ways humans structure their speech.
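To illustrate the rule-based alternative mentioned above: every response comes from patterns an expert wrote by hand, with no learning from data. A toy sketch (all rules and replies here are invented for illustration):

```python
import re

# A rule-based chatbot: a linguist or domain expert authors each pattern and
# canned reply; nothing in this system is fitted to training data.
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hello! How can I help?"),
    (re.compile(r"\bhours?\b", re.I), "We are open 9am-5pm, Monday to Friday."),
    (re.compile(r"\b(price|cost)\b", re.I), "Plans start at $10/month."),
]

def respond(utterance):
    for pattern, reply in RULES:
        if pattern.search(utterance):
            return reply
    return "Sorry, I didn't understand that."

print(respond("What are your opening hours?"))
```

The contrast with the machine-learning approach is that improving this bot means an expert editing `RULES` by hand, whereas a learned model improves by being retrained on new conversation data.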


Man-in-the-disk attacks: A cheat sheet

Cue a recent discovery by researchers at the software research firm Check Point: An attack they dubbed "man-in-the-disk" (MITD) attacks, which exploit a weakness in Android's handling of external storage to inject malicious code. The exploit allowing MITD attacks has serious repercussions for Android users because it exists at a level that's integral to Android's design. If man-in-the-disk sounds similar to man-in-the-middle (MITM) attacks, it's because there are many ways in which the attacks are similar. Both involve intercepting and often modifying data for nefarious purposes--it's simply the scale that distinguishes between the two attacks. Check Point's researchers found a number of apps--including some from major distributors such as Google--that were vulnerable to MITD attacks. Researchers also managed to build their own apps that took advantage of the exploit.


Want A Bigger Bang From AI? Embed It Into Your Apps


A key element of application-centric AI: Context. Say a sales executive wants to call on important customers in several cities. AI can review the accounts and predict which customers might increase business after a sales call, based on past history, and suggest an itinerary that would maximize ROI from the trip. One common factor in all those buckets is that integrating AI and machine learning into applications lets the app take some type of action automatically. Automation allows many tasks to be performed without human intervention—and without human error, says Swan. AI systems can execute relatively straightforward actions, such as booking a rental car for that sales trip. They can also tackle harder tasks that normally require not only time, but also some level of expertise, such as optimizing business workflows, reviewing financials for anomalies, or finding expense report violations. Often there’s still a human review, but that review can often be done faster, and more accurately, with the AI’s assistance in laying all the groundwork, presenting recommendations, and providing the background, documentation, and reasoning behind those recommendations.


Why open standards are the key to truly smart cities


In collaboration with several partners, including The Open Group, academic institutions and industry players, bIoTope is running a series of cross-domain smart city pilot projects which will provide proofs of concept for a wide range of applications, including smart metering, smart lighting, weather monitoring, and the management of shared electric vehicles. These projects will reveal the benefits that can be realised through the use of IoT technology, such as greater interoperability between smart city systems. They will also deliver a much-needed framework for security, privacy and trust to facilitate responsible access to, and ownership of, data on the IoT. Ultimately, bIoTope will deploy smart city pilots in Brussels, Lyon, Helsinki, Melbourne and Saint Petersburg. It is hoped that these pilot schemes will showcase sustainable business ecosystems that generate value for end users, solution providers, municipalities and other stakeholders.



Quote for the day:


"Risk more than others think is safe. Dream more than others think is practical." -- Howard Schultz


Daily Tech Digest - November 30, 2018

Man-in-the-middle attacks: A cheat sheet

The concept behind a man-in-the-middle attack is simple: Intercept traffic coming from one computer and send it to the original recipient without them knowing someone has read, and potentially altered, their traffic. MITM attacks give their perpetrator the ability to do things like insert their own cryptocurrency wallet to steal funds, redirect a browser to a malicious website, or passively steal information to be used in later cybercrimes. Any time a third party intercepts internet traffic, it can be called a MITM attack, and without proper authentication it's incredibly easy for an attacker to do. Public Wi-Fi networks, for example, are a common source of MITM attacks because neither the router nor a connected computer verifies its identity. In the case of a public Wi-Fi attack, an attacker would need to be nearby and on the same network, or alternatively have placed a computer on the network capable of sniffing out traffic.
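The mechanics are easy to sketch. The toy Python relay below (all names hypothetical) sits between a client and its intended server and passes every chunk of traffic through a `tamper` hook that can read or rewrite it in transit. A real attacker would insert themselves transparently, for example via ARP spoofing or a rogue access point; the sketch only shows why unauthenticated traffic is trivial to alter:

```python
import socket

def mitm_relay(listen_port, target_host, target_port, tamper=lambda b: b):
    """Accept one client, forward its traffic to the real server, and pass
    each chunk through `tamper`, which may read or rewrite it in transit."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(1)
    client, _ = srv.accept()                      # victim connects here...
    upstream = socket.create_connection((target_host, target_port))
    upstream.sendall(tamper(client.recv(4096)))   # ...the request is intercepted
    client.sendall(tamper(upstream.recv(4096)))   # ...and so is the reply
    for s in (client, upstream, srv):
        s.close()
```

Neither endpoint can tell the relay is there, which is exactly why authenticated, encrypted channels matter.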


Technical Debt Will Kill Your Agile Dreams

Bad engineering decisions are in a different category from ones that were made tactically, in full knowledge that the short-term priority was worth it. When it is clear that a decision was, in fact, tactical, it is much easier to convince people that refactoring needs to happen and the debt has to be paid off. Unfortunately, when the term is used as a polite way of saying bad engineering, it is unlikely there is any repayment strategy in place, and it is even harder to create one: first you need to convince people there is some bad engineering, then that it is causing problems, then you have to devise a better approach and convince the various stakeholders of that too. Finally, you need to convince them that the investment to refactor is warranted. It is like trying to win five matches in a row away from home when you don't even have your best players.


3 Keys to a Successful “Pre-Mortem”


The concept of a pre-mortem has been around for years, but only recently has it picked up speed in the engineering community. It is an activity run before starting a big stage of a project, but after the product mapping and prioritization work. Rather than exploring, after the fact, what went wrong and what to do differently in the future, the goal of a pre-mortem is to identify potential pitfalls ahead of time and apply preventative measures. It's a great idea, but for those new to the concept, it's easy to overlook some important aspects of the process. Talking about what might go wrong is scary. It acknowledges that many things are out of our control, and that we might mess up the things which are within our control. To talk about what might go wrong, and how to adapt to it, acknowledges the possibility of failure. As this is a rare thing in industry, if done initially outside of a structured activity, it can seem like trying to weasel your way out of work.



12 top web application firewalls compared

AWS WAF by itself does not offer the same breadth of features you could expect from other solutions on this list, but coupled with other AWS services it becomes as flexible as any competing solution. Existing AWS customers will see the most value in selecting AWS WAF, due to the architectural benefits of staying with a single vendor. ... Each architecture comes with its own set of pros and cons, varying from the simplicity of the SaaS option to the fine-grained control over configuration and deployment of the appliance-based offerings. Barracuda's various configurations offer very similar functionality, though there are some differences here and there. Server cloaking limits the amount of intelligence a potential attacker can gain about your configuration by hiding server banners, errors, identifying HTTP headers, return codes, and debug information. Server cloaking is available on all versions of the web application firewall, as is DDoS protection.
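In essence, server cloaking amounts to scrubbing anything identifying from responses before they leave the WAF. A hypothetical Python sketch of the header-stripping part (the header set is illustrative, not Barracuda's actual cloaking list):

```python
# Headers that leak details about the backend stack; illustrative only.
IDENTIFYING_HEADERS = {"server", "x-powered-by", "x-aspnet-version", "via"}

def cloak_headers(headers: dict) -> dict:
    """Return a copy of the response headers with identifying ones removed."""
    return {k: v for k, v in headers.items()
            if k.lower() not in IDENTIFYING_HEADERS}
```

A full implementation would also rewrite error pages and return codes so that nothing reveals the software behind the firewall.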


Creating a Turing Machine in Rust


A Turing machine is a mathematical model of computation that reads and writes symbols on a tape based on a table of rules. Each Turing machine can be defined by a list of states and a list of transitions. Starting from a start state (s0), the Turing machine works its way through the states until it reaches a final state (sf). If no transition leads to the final state, the Turing machine will run 'forever' and eventually run into errors. A transition is defined by the current state, the symbol read at the current position on the tape, the next state, and the next symbol that must be written to the tape. Additionally, it contains a direction that determines whether the head of the tape should move to the left, to the right, or not at all. To visualize this process, let's take a look at a very simple Turing machine that increments the value on the initial tape by one. ... While this is a very simple Turing machine, we can use the same model to create machines of any complexity. With that knowledge, we are now ready to lay out the basic structure of our project.
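The model described above (states, a transition table, and a head moving over the tape) can be sketched compactly, here in Python rather than the article's Rust, as a binary-increment machine:

```python
BLANK = "_"

# (state, read) -> (next_state, write, head_move); move: -1 left, +1 right, 0 stay
TRANSITIONS = {
    ("right", "0"): ("right", "0", +1),   # scan to the end of the number
    ("right", "1"): ("right", "1", +1),
    ("right", BLANK): ("carry", BLANK, -1),
    ("carry", "1"): ("carry", "0", -1),   # 1 + carry = 0, keep carrying
    ("carry", "0"): ("done", "1", 0),     # 0 + carry = 1, halt
    ("carry", BLANK): ("done", "1", 0),   # carry past the left edge: grow the tape
}

def run(tape_str, state="right", final="done"):
    """Execute the machine until the final state, then return the tape."""
    tape = dict(enumerate(tape_str))
    head = 0
    while state != final:
        read = tape.get(head, BLANK)
        state, write, move = TRANSITIONS[(state, read)]
        tape[head] = write
        head += move
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, BLANK) for i in range(lo, hi + 1)).strip(BLANK)
```

For example, `run("1011")` yields `"1100"` (binary 11 + 1 = 12). The Rust version in the article follows the same structure, with the transition table and tape expressed as typed structs.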


Tech support scammers are using this new trick to bypass security software

Symantec describes this kind of attack technique as 'living off the land', whereby attackers exploit legitimate features in systems to hide malicious activity. In and of itself, obfuscation isn't malicious, but it can be used for malicious purposes. "There are many open source tools to obfuscate code as developers don't want their code to be seen by the users of their software. Similar is the case with encryption algorithms like AES. Such algorithms have wide usage and implementations in the field of data security," said Siddhesh Chandrayan, threat analysis engineer at Symantec. "Both these mechanisms, by themselves, may not generate an alarm as they are legitimate tools. However, as outlined in the blog, scammers are now using these mechanisms to show fake alerts to the victims. Thus, scammers are 'living off the land' by using 'inherently non-malicious' technology in a malicious way," he added.
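To see why obfuscation is a dual-use tool, consider this minimal Python sketch: base64 is a perfectly legitimate encoding, yet it also hides a payload's intent from casual inspection until it is decoded:

```python
import base64

def obfuscate(script: str) -> str:
    """Legitimate encoding that doubles as concealment from casual review."""
    return base64.b64encode(script.encode()).decode()

def deobfuscate(blob: str) -> str:
    """Recover the original text; a scam page would do this before execution."""
    return base64.b64decode(blob.encode()).decode()
```

A signature-based scanner looking for the plain-text alert string will not find it in the encoded blob, which is exactly the property the scammers abuse.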


Standout predictions for the cloud – a CTO guide

“Many businesses have previously shied away from true multi-cloud deployments by favouring public infrastructures, due to the perceived expense of private platforms, rooted in the expertise required to run them. However, recent technological developments that enable businesses to take a highly-automated approach have shown that this is now an outdated view of cloud infrastructure. When it comes to transforming with cloud technologies, multi-cloud is proving itself to be the correct endgame for businesses in all industries.” ... “Enterprises are eliminating all the “state” from their endpoint devices, where any changes are stored only temporarily on the device and are quickly and efficiently on-ramped to the organisation’s cloud. One key benefit, aside from IT efficiency gains, is that it represents an elimination of the “dark data” that was previously stored in employees’ laptops or desktops. Suddenly, all this “dark” data is right at your fingertips – stored in the cloud – as a searchable, analysable and shareable repository.”



Typemock vs. Google Mock: A Closer Look

Writing tests for C++ can be complicated, especially when you are responsible for maintaining legacy code or working with third-party APIs. Fortunately, the C++ marketplace is always expanding, and you have several testing frameworks to choose from. Which one is the best? In this post, we'll consider Typemock vs. Google Mock. We'll use Typemock's Isolator++ and Google Mock, the C++ framework that is bundled with Google Test, to write a test function for a small project. As we implement the tests, we'll examine the differences in how the frameworks approach the same problem. ... Fowler defines an order object that interacts with a warehouse and mail service to fill orders and notify clients. He illustrates different approaches for mocking the mail service and warehouse so the order can be tested. This GitHub project contains Fowler's classes implemented in C++ with tests written in Google Mock. Let's use those classes as a starting point, with some small changes, for our comparison.
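For readers outside C++, the shape of Fowler's example is easy to sketch with Python's standard-library unittest.mock (a stand-in here, not either framework under comparison): the warehouse is replaced by a mock so the order can be tested without real inventory.

```python
from unittest.mock import Mock

class Order:
    """Simplified analogue of Fowler's order object."""
    def __init__(self, item, quantity):
        self.item, self.quantity, self.filled = item, quantity, False

    def fill(self, warehouse):
        # Only fill the order if the warehouse has enough stock.
        if warehouse.has_inventory(self.item, self.quantity):
            warehouse.remove(self.item, self.quantity)
            self.filled = True

order = Order("Talisker", 50)
warehouse = Mock()                          # the collaborator is mocked out
warehouse.has_inventory.return_value = True
order.fill(warehouse)
# Verify the interaction, not just the state, as mock-based tests do.
warehouse.remove.assert_called_once_with("Talisker", 50)
```

Both Isolator++ and Google Mock let you express the same idea in C++, differing mainly in how expectations and fakes are declared.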


Distributed caching with ASP.NET Core and Couchbase

Caching can help improve the performance of an ASP.NET Core application. Distributed caching is helpful when working with an ASP.NET application that’s deployed to a server farm or scalable cloud environment. Microsoft documentation contains examples of doing this with SQL Server or Redis, but in this post, I’ll show you an alternative. Couchbase Server is a distributed database with a memory-first (or optionally memory-only) storage architecture that makes it ideal for caching. Unlike Redis, it has a suite of richer capabilities that you can use later on as your use cases and your product expand. But for this blog post, I’m going to focus on its caching capabilities and integration with ASP.NET Core. You can follow along with all the code samples on GitHub. ... No matter which tool you use as a distributed cache (Couchbase, Redis, or SQL Server), ASP.NET Core provides a consistent interface for any caching technology you wish to use.
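The "consistent interface" point is the key design idea, and it is language-agnostic: application code depends on a small cache abstraction, so the backing store can be swapped without code changes. A Python sketch of the same pattern (names hypothetical; ASP.NET Core's actual abstraction is the IDistributedCache interface):

```python
from abc import ABC, abstractmethod

class DistributedCache(ABC):
    """The small surface application code depends on; backends are swappable."""
    @abstractmethod
    def get(self, key): ...
    @abstractmethod
    def set(self, key, value): ...

class InMemoryCache(DistributedCache):
    """Development backend; a Couchbase- or Redis-backed class would
    implement the same two methods and plug in unchanged."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def set(self, key, value):
        self._store[key] = value
```

Swapping the in-memory backend for a networked one is then a configuration change, not an application change, which is exactly what the ASP.NET Core interface buys you.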


7 reasons why artificial intelligence needs people


As AI projects roll out over the next few years, we will need to rethink the definition of the “work” that people do. In the post-AI era, the future of work will become one of the largest agenda items for policy makers, corporate executives and social economists. Despite the strong and inherently negative narrative around the impact on jobs, the bulk of the impact from the automation of work through AI will be a “displacement” of work, not a “replacement” of work – it’s easy to see how the abacus-to-calculator-to-Excel phenomenon created completely new work around financial planning, reporting and enterprise performance management. Similarly, AI will end up accelerating the future of work, and the resulting displacement of jobs will be part of a transition already under way, not an entirely new discussion. As some work gets automated, other jobs will be created, in particular ones that require creativity, compassion and generalized thinking.



Quote for the day:


"A single question can be more influential than a thousand statements." -- Bo Bennett


Daily Tech Digest - November 29, 2018

Closing the Awareness Gap in Technology Projects


The symptoms of a problem with operational awareness can vary. Sometimes you fail to obtain visibility at the level of accuracy you need; sometimes you get that visibility, but don’t know how to act on it; sometimes, even when insights lead to actions, these actions fail to lead to your desired results. If you’re trying, for example, to reduce time delays, your data analytics might show which parts of your project are moving more slowly than expected, but they’re unlikely to pinpoint the precise reason. Problems in one place might be the result of decisions made several steps back in the supply chain or project life cycle. Was planning off? Did procurement write a poor contract? Maybe your workers lack the necessary skills? The experience of using the system may also make it difficult for you and your employees to make sense of the data effectively. For example, in our work with boards of directors, who are taking a growing role in overseeing high-value projects, we sometimes observe members relying heavily on dashboards or documents developed with sophisticated data analytics. 



Three steps toward stronger data protection

Applications responsible for originally sourcing data into the system or modifying data as part of business transactions should also be responsible for digitally signing data before persisting them into databases. Any application retrieving such data for business use must verify the digital signature before using the data or refuse to use data whose integrity has been compromised. These are concrete steps companies can begin to take immediately to protect themselves. Enabling FIDO within a few weeks into web applications is now possible; incorporating encryption with secure, independent key management systems into applications can be accomplished within a few months. Integrating digital signatures may be accomplished at the same time as encryption or pursued as a subsequent step. By enabling these security controls, companies place themselves far, far ahead of where the vast majority of attacks currently occur.
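The sign-before-persist / verify-before-use flow described above can be sketched in a few lines. This Python sketch uses a symmetric HMAC to keep the example self-contained; a production system would typically use asymmetric signatures with keys held in the independent key management system the passage mentions:

```python
import hashlib
import hmac
import json

SECRET = b"demo-key"  # hypothetical; real keys belong in a key-management system

def sign_record(record: dict) -> dict:
    """Sign data before persisting it to the database."""
    payload = json.dumps(record, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"data": record, "sig": sig}

def verify_record(envelope: dict) -> dict:
    """Verify the signature before using the data; refuse tampered records."""
    payload = json.dumps(envelope["data"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, envelope["sig"]):
        raise ValueError("integrity check failed: refusing to use this data")
    return envelope["data"]
```

Any application that retrieves a record calls `verify_record` first, so data whose integrity has been compromised is rejected rather than silently consumed.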


Google Faces GDPR Complaints Over Web, Location Tracking
Even though Location History is off by default, Google appears to encourage its users to turn it on through overly simplified and carefully designed user interfaces that may drive users to hit "approve." In contrast to the ease of enabling the feature, any user who wants to research what their choice might mean must undertake extra clicks or explore multiple submenus, Forbrukerrådet's report contends. These design choices may contradict GDPR's requirement for "specific and informed" consent, Forbrukerrådet says. "Users will often take the path of least resistance in order to access a service as soon as possible," the report says. "Making the least privacy friendly choice part of the natural flow of a service can be a particularly effective dark pattern when the user is in a rush or just wants to start using the service." Forbrukerrådet contends that if users don't click on Location History at the start, Google keeps trying to get them to enable it. For example, the report contends that in order to keep location-tracking disabled, users must again decline it when trying to use Google's Assistant, Maps and Photos apps.


Data Science “Paint by the Numbers” with the Hypothesis Development Canvas

The one area under-invested in most data science projects is the thorough and comprehensive development of the hypothesis or use case being tested; that is, what we are trying to prove with our data science engagement and how we measure progress and success. To address these requirements, we developed the Hypothesis Development Canvas – a “paint by the numbers” template that we populate prior to executing a data science engagement to ensure that we thoroughly understand what we are trying to accomplish, the business value, how we are going to measure progress and success, and the impediments and potential risks associated with the hypothesis. The Hypothesis Development Canvas is designed to facilitate business stakeholder-data science collaboration.


6 Tips To Frame Your Digital Transformation With Enterprise Architecture


Call it digital transformation strategy—call it smart business—enterprise architecture is a method your company can use to organize your IT infrastructure to align with business goals. This isn’t a new concept. In fact, enterprise architecture has been around since the 1960s. But the overwhelming presence of tech in every facet of business today has forced us to rethink it, and to make it a more central focus of business management. ... Enterprise architecture deals with your organizational structure, business model, apps, and data just as much as it does information technology. When you put it together, you need to think from an employee perspective, a customer perspective, and from the perspective of meeting your business goals. After all, your digital transformation will impact your entire company, and your enterprise architecture will need to support it. Your enterprise architecture is of no use to anyone if no one but IT geeks can understand it. When you develop it, use common language. Create easy-to-understand examples.


Machine learning and the learning machine with Dr. Christopher Bishop

The field of AI is really evolving very rapidly, and we have to think about what the implications are, not just a few years ahead, but even further beyond. I think one thing that really characterizes the MSR Cambridge Research lab is that we have a very broad and multi-disciplinary approach. So, we have people who are real world experts in the algorithms of machine learning and engineers who can turn those algorithms into scalable technology. But we also have to think about what I call the sort of penumbra of research challenges that sit around the algorithms. Issues to do with fairness and transparency, issues to do with adversaries because, if it’s a publication, nobody is going to attack that. But if you put out a service to millions of people, then there will be bad actors in the world who will attack it in various ways. And so, we now have to think about AI and machine learning in this much broader context of large scale, real-world applications and that requires people from a whole range of disciplines.


Cloudlets extend cloud power to edge with virtualized delivery


With a cloudlet, there tend to be fewer users and they connect over a private wireless network. Cloudlets are also generally limited to soft-state data, such as application code or cached data that comes from a central cloud platform. In some ways, cloudlets are more like private clouds than public clouds, especially when it comes to self-management. With both cloudlets and private clouds, organizations deploy and maintain their own environments and determine the delivery of services and applications. Cloudlets also limit access to a local wireless network, whereas private clouds are available over the internet and other WANs to support as many users as necessary -- although nowhere near the number of users public clouds support. The private cloud theoretically serves users wherever they reside, whenever they need and from any device capable of connecting to the applications. In contrast, cloudlets are specific to mobile and IoT devices in close proximity.


KingMiner malware hijacks the full power of Windows Server CPUs

The malware generally targets IIS/SQL Microsoft Servers, using brute-force attacks to gain the credentials necessary to compromise a server. Once access is gained, a .sct Windows Scriptlet file is downloaded and executed on the victim's machine. This script detects the CPU architecture of the machine and downloads a payload tailored to the CPU in use. The payload appears to be a .zip but is actually an XML file, which the researchers say will "bypass emulation attempts." It is worth noting that if older versions of the attack files are found on the victim machine, the new infection deletes them. Once extracted, the malware payload creates a set of new registry keys and executes an XMRig miner file designed for mining Monero. The miner is configured to use 75 percent of CPU capacity but, potentially due to coding errors, actually utilizes 100 percent of the CPU. To make the threat actor more difficult to track or attribute, KingMiner's mining pool has been made private and its API has been turned off.


Managing a Real-Time Recovery in a Major Cloud Outage

While Always On Availability Groups is SQL Server’s most capable offering for both HA and DR, it requires licensing the more expensive Enterprise Edition. This option is able to deliver a recovery time of 5-10 seconds and a recovery point of seconds or less. It also offers readable secondaries for querying the databases (with appropriate licensing), and places no restrictions on the size of the database or the number of secondary instances. An Always On Availability Groups configuration that provides both HA and DR protection consists of a three-node arrangement, with two nodes in a single Availability Set or Zone and the third in a separate Azure Region. One notable limitation is that only the database is replicated, not the entire SQL instance, which must be protected by some other means. In addition to being cost-prohibitive for some database applications, this approach has another disadvantage: because it is application-specific, IT departments must implement other HA and DR provisions for all other applications.


Reputational Risk and Third-Party Validation

Security ratings are increasingly popular as a means of selecting and monitoring vendors. But Ryan Davis at CA Veracode also uses BitSight's ratings as a means of benchmarking his own organization for internal and external uses. "Taking somebody's word for it isn't enough these days," says Davis, an Information Security Manager at CA Veracode. "You can't just say 'Oh, yeah, well that person said they're secure ...'" For CA Veracode, security ratings provided by BitSight offer validation to prospective customers. "We want [customers] to be able to have that comfort that somebody else is also asserting that we're secure." In an interview about the value of security ratings, Davis discusses:

- How he employs BitSight Security Ratings;
- The business value, internally and externally; and
- How these ratings can be a competitive differentiator.

Davis is CA Veracode's Information Security Manager. He is responsible for ensuring the security and compliance of thousands of assets in a highly scalable SaaS environment. Davis has more than 15 years of experience in information technology and security in various industries.



Quote for the day:


"Without courage, it doesn't matter how good the leader's intentions are." -- Orrin Woodward