Daily Tech Digest - October 29, 2018

OpenStack Foundation releases software platform for edge computing

StarlingX is based on key technologies Wind River developed for its Titanium Cloud product. In May of this year, Intel, which owns Wind River, announced plans to turn over Titanium Cloud to OpenStack and deliver the StarlingX platform. StarlingX is controlled through RESTful APIs and has been integrated with a number of popular open-source projects, including OpenStack, Ceph, and Kubernetes. The software handles everything from hardware configuration down to host recovery and everything in between, such as configuration, host, service, and inventory management services, along with live migration of workloads. “When it comes to edge, the debates on applicable technologies are endless. And to give answers, it is crucial to be able to blend together and manage all the virtual machine (VM) and container-based workloads and bare-metal environments, which is exactly what you get from StarlingX,” wrote Glenn Seiler, vice president of product management and strategy at Wind River, in a blog post announcing StarlingX’s availability.



3 best practices for improving and maintaining data quality

Poor data quality has an extensive impact on a business, including wrong product deliveries, off-the-mark forecasts, inadequate planning, rework, poor customer experience and loss of reputation. Most of the factors affecting data quality are its defining elements, such as accuracy, completeness and consistency. In healthcare services, for example, inaccurate patient information and health records lead to adverse health outcomes. For a retail business, inconsistency in customer contact details not only creates delivery issues and customer complaints but also means missed marketing opportunities. For all data, validity is always crucial: if data is not validated against defined parameters such as format, range, and source, it is as good as absent. Depending on the urgency and critical nature of the operations, other factors specific to individual industries may become equally important. ... Finally, with no ambiguity, overlap or duplication, reliability of data across all sources is absolutely essential for high data quality.
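The format, range, and source checks described above can be sketched as simple validation rules. This is a minimal illustration; the record layout, field names, and trusted-source list are all hypothetical:

```python
import re

# Minimal sketch of rule-based data validation. Each check mirrors a
# quality dimension from the text: format, range, and source.
TRUSTED_SOURCES = {"crm", "pos", "web"}  # illustrative allow-list

def validate_record(record):
    """Return a list of quality issues for one customer record."""
    issues = []
    # Format: e-mail must match a basic pattern.
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", record.get("email", "")):
        issues.append("invalid email format")
    # Range: age must fall inside a plausible window.
    if not 0 < record.get("age", -1) < 120:
        issues.append("age out of range")
    # Source: reject data originating from systems outside the approved list.
    if record.get("source") not in TRUSTED_SOURCES:
        issues.append("untrusted source")
    return issues

good = {"email": "a@b.com", "age": 34, "source": "crm"}
bad = {"email": "not-an-email", "age": 500, "source": "fax"}
print(validate_record(good))   # -> []
print(validate_record(bad))
```

A real pipeline would apply many more rules, but even this shape makes "validate against format, range, and source" concrete: invalid records are surfaced rather than silently stored.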


British Airways data breach worse than thought


“It demonstrates that enterprises still do not have in place robust enough security to protect their back-end systems and databases, or the measures in place to identify these attacks in real time and cut them off as soon as abnormal activity is detected. “It is not beyond the means of organisations, especially those that process and manage such sensitive and critical information, to put in place tools that can analyse and detect threats or the exfiltration of data over a significant period of time.” This was especially important, said Carter, because it would then put the onus on affected customers to notify their financial services providers of any fraud they may fall victim to. LogRhythm vice-president and Europe, Middle East and Africa (Emea) managing director, Ross Brewer, added: “If I were BA, I would be very worried about the impact both breaches will have on the company’s reputation. The fact that both data breaches have taken place in the past six months is extremely worrying – and very embarrassing for the airline.”


3 Keys to Reducing the Threat of Ransomware

Wouldn't it be more sensible to pay for a third-party review of security hygiene and posture, and bolster it wherever it's lacking, including penetration testing? Why rebuild? Maybe there was something wrong in the IT architecture, or the systems were outdated and needed replacement. Maybe the fear of something being left behind that might cause reinfection was too much to bear. We may never get the full story, but we do know the enormous cost of rebuilding these systems. As a CIO, I experienced numerous attempted ransomware attacks and several instances of server encryption, or attempted encryption, where we were able to take servers out of rotation. Fortunately, ransomware then was not what it is now, and though we were attacked, our backups were not affected. Luck wasn't the only reason we were able to recover so quickly. We used good cyber hygiene and best practices to reduce the hacking threat. We also took snapshots of our infrastructure every 30 minutes, with full backups nightly. We always recovered with minimal data loss.


China has been 'hijacking the vital internet backbone of western countries'

The research duo says they've built "a route tracing system monitoring the BGP announcements and distinguishing patterns suggesting accidental or deliberate hijacking." Using this system, they tracked down long-lived BGP hijacks to the ten PoPs --eight in the US and two in Canada-- that China Telecom has been silently and slowly setting up in North America since the early 2000s. "Using these numerous PoPs, [China Telecom] has already relatively seamlessly hijacked the domestic US and cross-US traffic and redirected it to China over days, weeks, and months," researchers said. "While one may argue such attacks can always be explained by 'normal' BGP behavior, these, in particular, suggest malicious intent, precisely because of their unusual transit characteristics -- namely the lengthened routes and the abnormal durations." In their paper, the duo lists several long-lived BGP hijacks that have hijacked traffic for a particular network, and have made it take a long detour through China Telecom's network in mainland China, before letting it reach its intended and final destination.
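The "lengthened routes and abnormal durations" heuristic can be sketched as a toy detector. This is not the researchers' actual system; the announcement format, baseline table, and thresholds below are all invented for illustration:

```python
from datetime import timedelta

# Toy hijack heuristic: flag an announcement when the AS path is much longer
# than the route's historical baseline AND the anomaly has persisted for days
# (the long-lived detours described in the article). Thresholds are illustrative.
BASELINE_PATH_LEN = {"203.0.113.0/24": 3}   # normal hop count per prefix

def suspicious(announcement):
    baseline = BASELINE_PATH_LEN.get(announcement["prefix"])
    detour = baseline is not None and len(announcement["as_path"]) >= baseline + 3
    long_lived = announcement["duration"] >= timedelta(days=3)
    return detour and long_lived

normal = {"prefix": "203.0.113.0/24", "as_path": [64500, 64501, 64502],
          "duration": timedelta(hours=2)}
hijack = {"prefix": "203.0.113.0/24",
          "as_path": [64500, 4134, 64496, 64497, 64498, 64502],
          "duration": timedelta(days=12)}
print(suspicious(normal), suspicious(hijack))   # -> False True
```

A production monitor would also compare observed transit ASes against expected peering relationships, but the two signals the paper highlights (path length and duration) already separate the examples here.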


How to protect your organization from insider threats

Modern DLP solutions are intelligent data loss prevention systems, combining multiple disciplines including user activity monitoring, behavior analytics, and forensics in order to increase the effectiveness of a DLP implementation. These comprehensive DLP solutions allow for broader and more capable oversight to be implemented that can analyze user behavior, assign risk scores, and take action based on a complex set of user activities and data access. With human behavior-driven data loss prevention, organizations can place the emphasis on user activity monitoring and the ability to define and then dynamically update risk scores for different types of users. Leveraging machine learning and artificial intelligence to identify anomalies, DLP can take action based on users’ behavior. Insider threats and DLP are a hot topic of conversation at board meetings. This is a positive trend, as it ensures visibility at the board level into the risks associated with insider threats and the urgency of a comprehensive DLP strategy to minimize data exfiltration risk.
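The "assign risk scores, then take action" loop can be sketched in a few lines. The event types, weights, and threshold below are invented for illustration; a real product would derive them from baselined user behavior:

```python
# Sketch of behavior-driven risk scoring for DLP. Weights and the alert
# threshold are hypothetical, not taken from any specific product.
EVENT_WEIGHTS = {
    "usb_copy": 30,
    "bulk_download": 25,
    "after_hours_login": 10,
    "normal_access": 0,
}
ALERT_THRESHOLD = 50

def risk_score(events):
    # Unknown event types get a small default weight rather than being ignored.
    return sum(EVENT_WEIGHTS.get(e, 5) for e in events)

def action_for(events):
    score = risk_score(events)
    if score >= ALERT_THRESHOLD:
        return "block_and_alert"
    if score > 0:
        return "monitor"
    return "allow"

print(action_for(["normal_access"]))                                   # -> allow
print(action_for(["after_hours_login", "bulk_download", "usb_copy"]))  # -> block_and_alert
```

The key point the excerpt makes is that the score is dynamic: as a user's activity pattern changes, the same policy table yields a different action.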


PoC Attack Leverages Microsoft Office and YouTube to Deliver Malware


According to a Cymulate analysis posted on Thursday, the team found that it’s possible to edit that HTML code to point to malware instead of the real YouTube video. “A file called ‘document.xml’ is a default XML file used by Word that you can extract and edit,” Avihai Ben-Yossef, CTO at Cymulate, explained to Threatpost. “The embedded video configuration will be available there, with a parameter called ’embeddedHtml’ and an iFrame for the YouTube video, which can be replaced with your own HTML.” In the PoC, the replacement HTML contains a Base64-encoded malware binary that opens the download manager for Internet Explorer, which installs the malware. The video will seem to be legitimate to the user, but the malware will unpack silently in the background. “Successful exploitation can allow any code execution – ransomware, a trojan,” Ben-Yossef said, adding that detection by antivirus would depend on the specific payload’s other evasion features. Obviously, the attack would work best with a zero-day payload.
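Because a .docx file is just a ZIP archive, the inspection step a defender might take can be sketched directly. The `embeddedHtml` parameter name comes from the article; the archive built here is a stand-in, and the allow-list check is my own illustration, not Cymulate's tooling:

```python
import io
import zipfile

# A .docx is a ZIP archive; per the article, the embedded-video settings live
# in word/document.xml under a parameter called 'embeddedHtml'. This sketch
# flags documents whose embedded HTML is not a YouTube iframe.
def embedded_html_looks_safe(docx_bytes):
    with zipfile.ZipFile(io.BytesIO(docx_bytes)) as zf:
        xml = zf.read("word/document.xml").decode("utf-8", errors="replace")
    if "embeddedHtml" not in xml:
        return True                     # no embedded video at all
    return "youtube.com/embed/" in xml  # crude allow-list check

def fake_docx(embedded_html):
    """Build a minimal stand-in archive (not a valid Word document)."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("word/document.xml",
                    f'<w:document embeddedHtml="{embedded_html}"/>')
    return buf.getvalue()

benign = fake_docx('&lt;iframe src="https://www.youtube.com/embed/abc123"&gt;')
tampered = fake_docx('&lt;script&gt;downloadPayload()&lt;/script&gt;')
print(embedded_html_looks_safe(benign), embedded_html_looks_safe(tampered))  # -> True False
```

Mail gateways could apply a check of this shape to quarantine documents whose embedded HTML has been swapped out, which is exactly the edit the PoC performs.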


Machine Learning Becomes Mainstream: How to Increase Your Competitive Advantage

Machine learning is a part of predictive analytics, and it is made up of deep learning and statistical/other machine learning. For deep learning, algorithms are applied that allow for multiple layers of learning more and more complex representations of data. For statistical/other machine learning, statistical algorithms and algorithms based on other techniques are applied to help machines estimate functions from learned examples. Essentially, machine learning allows computers to train by building a mathematical model based on one or more data sets. The resulting model is then scored on the predictions it makes from the available data. So when should you apply machine learning? ... With the right machine learning strategy, the barriers to adoption are actually fairly low. And, when you consider the reduced TCO and increased efficiency throughout your business, you can see how the transition can pay for itself in very little time. As well, Intel is dedicated to establishing a developer and data science community to exchange thought leadership ideas across disciplines of advanced analytics.
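The "build a mathematical model from a data set, then predict" idea fits in a few lines of plain Python. This is a deliberately tiny example (one-variable least squares on synthetic data), not a recommendation of any particular library or method:

```python
# Minimal "train then predict": fit y = a*x + b by ordinary least squares,
# with no ML library, then use the fitted model on a new input.
def fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx            # slope and intercept

def predict(model, x):
    a, b = model
    return a * x + b

# Synthetic training set, roughly y = 2x with a little noise.
train_x, train_y = [1, 2, 3, 4], [2.1, 3.9, 6.0, 8.1]
model = fit(train_x, train_y)
print(predict(model, 5))             # close to 10
```

Deep learning replaces this single linear function with many stacked, nonlinear layers, but the workflow the excerpt describes (estimate a function from examples, then score its predictions) is the same.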


The US NVD is slow; the median gap between a vulnerability becoming public and appearing on the list is seven days. China’s NVD is quicker to upload public vulnerabilities, but has been accused of altering data to hide government influence. The Russian NVD, run by the country’s Federal Service for Technical and Export Control of Russia, misses many vulnerabilities and is slow with what it does publish. Good threat intelligence is more than a list of vulnerabilities. Instead of relying on NVDs alone to power your vulnerability scanning, companies should look to other sources to supplement their threat intelligence operations. According to a study by Tenable, over a third of vulnerabilities have a working exploit available on the same day of disclosure, giving hackers days or more of unfettered opportunity to attack. By broadening the scope of your intelligence gathering, you can close the window of opportunity for cybercriminals and gain a richer set of data with which to defend yourself.
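Supplementing a slow NVD with faster feeds is, mechanically, a merge keyed on CVE id. The sketch below keeps the earliest publication date per CVE and measures the exposure window; every entry and date is fabricated for the example:

```python
from datetime import date

# Merge several vulnerability feeds, keeping the earliest publication date
# seen for each CVE, then measure the gap versus public disclosure.
def merge_feeds(*feeds):
    merged = {}
    for feed in feeds:
        for cve, published in feed.items():
            if cve not in merged or published < merged[cve]:
                merged[cve] = published
    return merged

def exposure_days(disclosed, published):
    return (published - disclosed).days

nvd = {"CVE-2018-0001": date(2018, 10, 8)}                    # 7 days behind
vendor_feed = {"CVE-2018-0001": date(2018, 10, 1),
               "CVE-2018-0002": date(2018, 10, 2)}            # hypothetical fast feed
merged = merge_feeds(nvd, vendor_feed)

print(exposure_days(date(2018, 10, 1), nvd["CVE-2018-0001"]))     # -> 7 (NVD alone)
print(exposure_days(date(2018, 10, 1), merged["CVE-2018-0001"]))  # -> 0 (merged)
```

The merged view also surfaces CVE-2018-0002, which the slow feed alone would have missed entirely during the window.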


Services are everywhere, if we only have the lens to see them. Regrettably, we often notice them only when they are dissatisfying. Not long ago, I “discovered” an internal service in my organization: my team created a presentation to give to leadership, so we wanted it to look polished. Unfortunately, none of us had visual-design chops, so we asked someone from our design team to help. The reply was, “Is there a due date?” We didn’t have a deadline (yet), but we also had no idea when our understandably busy colleagues would be able to turn it around. This is clearly a (design) service for internal customers who have an idea of what makes it fit for their purpose. In this case, it was a reliable turnaround time. We all make requests of individuals and teams all the time. But without a mutual exchange of information -- for example, expected delivery speed -- we’re going to pad our requests with extra time or fake deadlines.



Quote for the day:


"Added pressure and responsibility should not change one's leadership style, it should merely expose that which already exists." -- Mark W. Boyer



Daily Tech Digest - October 28, 2018


Sophia shared several messages with the president and responded to what he was saying. Sophia’s structural framework includes a camera that looks for visual cues such as facial expressions when deciding when to keep a conversation moving. The humanoid, whom Hanson Robotics gave an Audrey Hepburn look for this leg of the trip, started the conversation by telling Aliyev how she had obtained an ASAN visa at Baku airport and described Baku’s architecture. Sophia praised Aliyev for championing the ASAN initiative: “Your visit here [at the ASAN complex dedication] underscores the special attention you are giving to e-governance and the innovation ecosystem.” The visit mainly underscored two things: first, that the ASAN agency is harnessing the electronic and cyber worlds to make citizens’ lives better; and second, that Hanson Robotics is capable of making a robot whose artificial intelligence can not only help it learn tasks — like cleaning — but also hold robust conversations with humans.



It’s banking Jim, but not as we know it

A completely different FinTech world had also emerged out of Asia, and many suddenly woke up to the fact that they hadn’t even been looking. By way of example, in 2018, Alipay and WeChat Pay each processed more dollars in a month through their apps than PayPal processes in a year. China has seen an explosion of online mobile payments, rising from $5 trillion in 2016 to $15.5 trillion in 2017 and predicted to boom to $45 trillion in 2020. Compare this to the USA and you see a quiet revolution, and it is not just about Alibaba and Tencent, but the whole FinTech scene emerging from Asia, Africa and South America. This FinTech scene began without the blinkers of big bank thinking, and has created wholly integrated internet finance on mobile apps, or superapps, seamlessly. ... Banks should feel duly threatened by FinTech 3.0 because they are control freaks by nature, who partner with no one unless they have to. For a big bank to about-face and start to become an open market collaborator is a huge cultural change and, in the meantime, the challenger banks are actively building their ecosystems. FinTech 3.0, which starts around now and will play through 2025, will be the most interesting of these three phases as yes, it truly does disrupt banking.


Enterprise Architecture Governance: A Holistic View

trending_large-5
Enterprise Architecture Governance is a practice encompassing the fundamental aspects of managing a business. It involves firm leadership, a complete knowledge of organizational structure, a confident direction, and the enablement of effective IT processes to promote an enterprise’s strategies. However, if distilled into just one area, the objective of EA Governance is to harmonize the architectural requirements of an enterprise into an understandable set of policies, processes, procedures and standards—all of which ensure an organization’s visions and standards are aligned with actual business requirements. It is not an academic discipline detached from present reality, nor is it based on speculation about what is and what is not occurring. EA Governance is an integral part of deploying and maintaining business strategies. And in many ways, it is a never-ending job. Without EA Governance, an organization is likely to tumble into a web of non-standardized technology, bad product purchases or development, and monolithic architectures.


Designing Organisations with Purposeful Agile

If we think of organisations as a living system, similar to an individual human being, I like to define the culture of an organisation as its "unconscious" part. The first thing needed to change any culture is to become aware of the installed one. This is a very difficult piece. People love to do things rather than observe what things they do, and how they do them. Revealing the installed culture may be one of the most difficult parts to get to the 3rd stage. There is no change that can happen if there is no space for change. Usually the "change space" is filled with our common beliefs and mental models that make us behave the way we behave and make decisions the way we do. An example of creating space for change is working on managers’ agenda, making them available for their teams. It enables listening to the way the organisation operates and seeing what emerges. Freeing the busy agendas allows change to take place and helps us sense and grow awareness of the "installed culture".


Enterprise Agility Through Enterprise Architecture


Understanding Enterprise Architecture capabilities is a very important aspect of driving this transformation, as it supports the representation of the business and IT aspects of the enterprise and their inter-relationships and dependencies. Enterprise Architecture is a depiction of the target structure for an organization's processes. It describes how business goals are realized by business processes, and how those processes can be better served through technology. It has a critical role and is a strategic tool in addressing how business aligns with IT teams to meet changing business needs (changes from both a business-process and a technology perspective); how the complexity of handling these challenges can be simplified by breaking them down into multiple aspects; and how to deal with the dependencies between the various upstream and downstream systems of various portfolios/LOBs by taking an abstracted view at each layer.


Behavioral Economics & Enterprise Architecture (2): Enterprise Architecture

The promise of enterprise architecture is that it helps improve decision making. Typically, the role of the enterprise architect is to advise and enable other stakeholders to make better decisions. Therefore, Enterprise Architecture – more than anything else – is a social discipline, in that it demands social skills and interaction in order for practitioners to successfully engage with stakeholders and change their behavior.  Not surprisingly, enterprise architects are more effective in steering decisions when they consider that they are dealing with Humans. And Humans, as we’ve explained in our previous blog, can be irrational, naïve and impulsive. By taking these biases into consideration and making choices as easy as possible for decision makers, architects can dramatically increase the likelihood of getting their point across and ultimately help deliver better business outcomes for the organization. Here, we present some principles to get you started.


How to build your enterprise architecture using the cloud

Due to the risks and implications of cybercrime and data breaches, many businesses are opting for a security-first tenet – and rightly so. The emphasis will be on making sure that data is as secure as possible, both while in transit and during storage. In addition, all workloads may need to be authorised by the security team prior to deployment. Some companies operate on a zero-trust basis with their cloud service provider, retaining control of all encryption keys (e.g. managing key rotation, storing keys on an HSM, etc.). Others operate on a total trust basis, relying on the cloud provider's own enterprise-grade security processes to keep data secure. Whichever level of trust you employ depends on a variety of factors, such as your security approach and your familiarity with the cloud provider you’re using. The importance of prioritising tenets cannot be overstated. If done correctly, they will help you to frame your policies, procedures and standards, building an enduring foundation for your enterprise architecture.
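The zero-trust pattern (customer retains the master key, the provider only ever holds wrapped data keys) can be sketched as below. XOR stands in for a real key-wrap cipher such as AES key wrap and must not be used in production; every name here is illustrative:

```python
import secrets

# Zero-trust sketch: the customer keeps the master key; the cloud stores only
# wrapped (encrypted) data keys. XOR is a stand-in for a real cipher.
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def wrap(data_key, master_key):
    return xor_bytes(data_key, master_key)

def rotate_master(wrapped_keys, old_master, new_master):
    """Key rotation: unwrap each data key and re-wrap it under the new master."""
    return [wrap(xor_bytes(w, old_master), new_master) for w in wrapped_keys]

master = secrets.token_bytes(32)
data_key = secrets.token_bytes(32)
stored = [wrap(data_key, master)]       # this is all the provider ever sees

new_master = secrets.token_bytes(32)
stored = rotate_master(stored, master, new_master)
print(xor_bytes(stored[0], new_master) == data_key)   # -> True, key survives rotation
```

Note the design point: rotation touches only the small wrapped keys, not the bulk data they protect, which is why customer-managed rotation is practical at cloud scale.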


How to Build a Secure API Strategy for the API Economy

APIs could expose a company’s transactional systems to the outside world in unprecedented ways. Systems of record are not intended to be available publicly. As such, development teams must test, test, and retest APIs stringently before release. Once developers embed an unsecured API into an enterprise's applications, it can infiltrate and reduce that organization’s overall security posture. Some enterprises have hundreds of consumer-facing web applications, and each of those websites could have anywhere from five to 32 vulnerabilities — that’s a staggering risk of exposure. Sometimes, developers mistake the capabilities of API management tools and expect them to solve all API security challenges. API management tools do provide security policies that work at the perimeter, but not all of them play a role in securing the business logic that serves up the API. For this, developers need to treat APIs as yet another form factor for their applications and ensure that adequate attention is paid to securing them.


Edge Computing: The Driving Force in New Architecture Innovation

Every edge computing project starts with collecting data from IoT devices, sensors or mobile users, and the success of the project depends on the ability to transform that data into actionable intelligence while delivering a return on investment. Initial deployments start with connecting gateway-class devices to IoT devices/sensors at the edge and performing most of the data processing on server-class infrastructure in a backend data center or cloud. Gateways perform the data collection, aggregation, and filtering, and send the useful data for processing to a centralized cloud or data center. As we see larger-scale deployments and more devices (e.g. smart factories, oil & gas, connected vehicles), server-class infrastructure will move closer to the edge to enable data processing and decision making at scale with lower latency. These servers may reside on-premise at the edge location where sensors and gateways are located, or they may reside at a central office or micro-data centers between the edge location and the backend centralized cloud.
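The gateway's "aggregate and filter, forward only the useful data" role can be sketched as follows. The sensor names, fields, and thresholds are illustrative:

```python
# Gateway-side sketch: compress a batch of raw sensor readings into a compact
# summary plus out-of-range alerts, so only the summary travels upstream.
def summarize(readings, low=10.0, high=80.0):
    values = [r["value"] for r in readings]
    return {
        "sensor": readings[0]["sensor"],
        "count": len(values),
        "mean": sum(values) / len(values),
        "alerts": [v for v in values if not low <= v <= high],
    }

raw = [{"sensor": "temp-7", "value": v} for v in (21.0, 22.5, 95.2, 23.1)]
summary = summarize(raw)
print(summary["count"], summary["alerts"])   # -> 4 [95.2]
```

Instead of four raw readings, the backend receives one summary record; at factory scale, that reduction in upstream volume (and the local detection of the out-of-range 95.2) is the whole argument for processing at the edge.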


How This Blockchain Innovation Could Impact Billions

Athman Ali, CEO of impact investing firm 1000 Alternatives, has partnered with Everest to address some of the economic challenges facing people and countries in Sub-Saharan Africa. The partnership was originally focused on reducing transportation costs in the region—cut costs of transport and the cost of almost everything else will fall, too. “We have since deepened our partnership to position the Everest platform for use by innovators, incubation hubs and social entrepreneurs to spur innovations in the various areas that 1000 Alternatives focuses towards achieving positive social impact,” Ali says. The expanded relationship now includes another area with the potential to impact people in dramatic ways. “Legal identity, land titles tracking and documentation of ownership and transfers of assets remain a big challenge in African countries,” Ali says. “Opportunities to use the blockchain platform to improve services in education, health and livelihoods is another set of opportunities.”



Quote for the day:


"Nobody in your organization will be able to sustain a level of motivation higher than you have as their leader." -- Danny Cox


Daily Tech Digest - October 27, 2018


EA can play an important role in defining the strategy of the business itself, rather than coming into the picture after the strategy has been written out at a corporate level and then looking for the implications of that strategy for IT or technology, data or processes. Enterprise Architects, by the nature of what they do, have the capability to understand the customer’s point of view. So far, they’ve understood the challenges and needs of internal stakeholders. If they take this skill outside the business, they’ll also be able to capture what the customer wants from an end-to-end journey point of view. Once that is established, other elements like organization, process, data, and technology can facilitate realizing the goal. Enterprise Architects are good at connecting the dots. That’s why they should be interested in polishing Design Thinking skills and positioning themselves in discussions closer to the consumer. ... Whatever organization we work for, it’s all about people: the people who work for that organization and the people these organizations are here to serve. If the focus is toward people, we can Design Think the Digital Enterprise for Business Transformations – either today or for tomorrow.



A Brief History of High-Performing Teams by Jessica Kerr

Kerr presented the term "Symmathesy," first proposed by Nora Bateson, derived from Sym, meaning together, and Mathesie, meaning learning. While originally coined to describe ecological systems that are constantly changing, the term can be applied more generally. A system is not the sum of its parts – that would be an aggregate. Rather, the parts of a system are the sum of their past interactions. Kerr argues that a software system is even more of a symmathesy than these historical examples. It's not simply the software team, but also includes the customers, as well as the running software, the database, the hardware, and all the tools the team uses. These parts all interact with each other and create mutual learning, making the team, and its members, grow and evolve. Participating in such an environment requires showing up to work with your whole self, prepared to be part of the living system. Kerr says this is why adopting such a mindset is so hard. Beyond thinking about yourself or just your team, you have to think much more broadly, about your organization, division or company as a whole, creating bridges to other teams where necessary.



Visa B2B Connect is a distributed ledger-based platform, which aims to provide financial institutions with a simple, fast and secure way to process cross border business-to-business payments globally. B2B Connect’s digital identity feature tokenizes an organization’s sensitive business information, such as banking details and account numbers, giving them a unique identifier that can be used to facilitate transactions on the platform. In preparation for the commercial launch next year, Visa is expanding partnerships to add additional functionality to the B2B Connect platform. As part of the B2B Connect platform, Visa is integrating open source Hyperledger Fabric framework from the Linux Foundation with Visa’s core assets. This will help provide an improved process to facilitate financial transactions on a scalable, permissioned network. The work between Visa and IBM will enable Visa’s mutual financial institutional clients and ecosystem to maximize the network.



Addressing third-party cyber risk is challenging and significant. In larger organisations, procurement decisions are usually made without input from those responsible for cyber security, and such agreements can provide access to critical systems via open application programming interfaces (APIs) and other interaction mechanisms. Managing supplier relationships is also overwhelming without a standard process for handling cyber risk when the relationship is governed by an arms-length contractual arrangement. Many organisations are struggling to address their internal network security issues and have not sufficiently considered the risks beyond their own network. But third-party cyber security risk is too significant and too dangerous an issue for board members to continue to overlook. Current regulatory initiatives, including the Networks and Information Systems (NIS) Directive and GDPR, require organisations to take responsibility for ensuring that external suppliers have implemented adequate cyber security measures.



This time it’s personal: the financial industry is banking on AI to better serve customers

If fintech stood alone, I don’t think banks would rush to evolve. Financial institutions probably won’t lose much sleep over fintech in the next three to five years. ... Many tech companies see fintech and startups as enablers to get into finance. All of a sudden those enablers become very powerful, very quickly, and that’s a big misconception of the financial market. These tech companies, however, don’t want to become banks. For them, it’s about adding value for their customers. If you can give a customer multiple fintech services, then they’re more likely to choose the convenience of your platform. ... We’ve seen large financial institutions show that it’s possible to manage their legacy operations and daily business while, at the same time, almost separately, fostering a more agile startup mentality for transformation. New business ventures mean that this startup mentality must be kept separate, which, of course, also means that more money has to be spent.


The Four Building Blocks of Transformation


The conventional response is a transformation initiative — a top-down restructuring, accompanied by across-the-board cost cutting, a technological reboot, and some reengineering. Maybe you’ve been through a few such initiatives. If so, you know firsthand how difficult it is for them to succeed. These efforts tend to come in late and over budget, leaving the organization fatigued, demoralized, and not much changed. They don’t take into account the fundamentally new kinds of leverage available to businesses that have emerged in the last 10 years: new networks, new data gathering and analysis resources, and new ways of codifying knowledge. Successful transformations may be relatively rare, but they do exist — and yours can succeed as well. A transformation, in this context, is a major shift in an organization’s capabilities and identity so that it can deliver valuable results, relevant to its purpose, that it couldn’t master before. It doesn’t necessarily involve a single major initiative; but the company develops an ongoing mastery of change, in which adaptability feels natural to leaders and employees.


AI vs. Algorithms: What's the Difference?

According to Mousavi, we should think of the relationship between algorithms and AI as the relationship between “cars and flying cars.” “The key difference is that an algorithm defines the process through which a decision is made, and AI uses training data to make such a decision. For example, you can collect data from thousands of driving hours by various drivers and train AI about how to drive a car. Or you can just code it [to say] when [it] identifies an obstacle on the road it pushes the brake, [or] when it sees a speed sign, [it] complies. So with an algorithm, you are [setting] the criteria for actions,” he explained. On the other hand, Mousavi said that with AI you, “would not tell the computer what to do because AI determines [what action to take based on the] data that says this is what people almost always do.” ... As it turns out, AI is also known for adopting unsavory behaviors, failing to discern political, social, and at times, even objective correctness from incorrectness. “AI invariably places women
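Mousavi's braking example can be made concrete: a hand-written rule fixes the decision criterion in advance, while the "learned" version derives its criterion from examples of what drivers actually did. The training data and distances below are invented for illustration:

```python
# Algorithm: the engineer sets the criterion directly.
def rule_based_brake(distance_m):
    return distance_m < 20          # hand-chosen braking distance

# AI flavor: derive the criterion from (distance, driver_braked) examples.
# Here the "learning" is just the midpoint between the farthest distance at
# which drivers braked and the nearest at which they did not.
def learn_brake_threshold(examples):
    braked = max(d for d, b in examples if b)
    cruised = min(d for d, b in examples if not b)
    return (braked + cruised) / 2

threshold = learn_brake_threshold([(5, True), (12, True), (18, True),
                                   (30, False), (45, False)])

def learned_brake(distance_m):
    return distance_m < threshold

print(rule_based_brake(15), learned_brake(15), threshold)   # -> True True 24.0
```

Both functions brake at 15 m, but only the second one changes its behavior when the driving data changes, which is exactly the distinction the quote draws.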


9 data management and security jobs of the future


“The theory behind junk data is often wrong, and we need to fix it,” the authors write. “Data that has not been used by anyone in the past 12 months, has no foreseeable use as initially imagined, and isn’t necessary for regulatory purposes, can still be turned into insights. Just like food waste is a carbon that can be used to produce green energy, data waste is still meaningful if cleaned.” “In this role, you’ll apply analytical rigor and statistical methods to data trash in order to guide decision-making, product development and strategic initiatives. This will be done by creating a ‘data trash nutrition labeling’ system that will rate the quality of waste datasets and manage the ‘data-growth-data-trash’ ratio.” ... “The National Cyber Security Center (NCSC) is seeking a new type of cyber agent, one that not only can defend our national infrastructure but also, if necessary, undertake an offensive against our nation’s adversaries,” the authors say.  To be considered for this critical role, you must display an excellent track record of cyber hacking, ‘grey-hat-focused’ software development or distributed denial of service attack experience.


How FIs Are Combating Increasingly Sophisticated Attacks

One of the biggest challenges for banks is the sheer amount of attack methods. FS-ISAC officials have seen fraudsters use a wide, and expanding, range of techniques. These attacks are not just growing in number, though — they’re becoming more adept at wreaking havoc. “Cybercriminals remain a threat, particularly those who steal money, because they go after banks and their customers — companies like retailers,” Nelson said. “The number of different attacks has really increased over time, and they’re more sophisticated. There’s more malware and more variance emerging all the time.” The ever-growing list of cyberattack methods, paired with the surge in digital transactions, means that banks and FIs that want to avoid becoming the latest victim of cybercrime need to invest in systems that can detect cyberattacks. Modern fraud prevention solutions are built around new, emerging tools and technologies, like machine learning (ML) and artificial intelligence (AI).
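The ML/AI detection systems the article mentions are proprietary, but a simple statistical stand-in conveys the idea: flag transactions far outside a customer's historical pattern. The amounts and the three-sigma cutoff are fabricated for the example:

```python
import statistics

# Toy anomaly detector: flag a transaction more than z_cutoff standard
# deviations from the customer's historical mean spend.
def is_anomalous(history, amount, z_cutoff=3.0):
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    return abs(amount - mean) > z_cutoff * stdev

history = [42.0, 38.5, 55.0, 47.2, 40.8, 51.3]   # past purchase amounts
print(is_anomalous(history, 49.0))     # typical purchase -> False
print(is_anomalous(history, 2500.0))   # wildly out of pattern -> True
```

Real fraud models score many features at once (merchant, geography, device, velocity), but the shape is the same: learn a per-customer baseline, then act on deviations from it.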


Next-Gen Autonomous System Design Made Easier With DDS and ROS


For those unfamiliar, ROS (Robot Operating System) is an open-source software framework for developing robot software applications. It started as an open-source project in 2007 and is a mainstay in robotics research because of its ease of use and open-source hackability. As a result, it has grown to include tools for 3D simulation and visualization, route planning, pose estimation, and support for nearly every type of robotic arm, actuator and gripper. While the tools in ROS are impressive, the performance and scalability of ROS itself could not keep pace with the needs of next-gen robotics applications, such as autonomous vehicles, multi-robot swarms and distributed deployments. ROS was designed to control a single robot from a desktop Linux environment, but these new applications required real-time performance with safety-critical implications, potentially in distributed environments with limited memory and unreliable networking.
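The decoupled, topic-based publish/subscribe model that DDS brings to this picture can be illustrated with a toy example. This is plain Python, not the actual ROS or DDS API; the topic name and message shape are invented for illustration:

```python
from collections import defaultdict

class Bus:
    """Toy topic-based publish/subscribe bus, illustrating the
    decoupling that DDS-style messaging gives robot software nodes."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, msg):
        # Publishers never reference subscribers directly; the topic
        # name is the only coupling between them.
        for cb in self._subs[topic]:
            cb(msg)

bus = Bus()
received = []
bus.subscribe("/cmd_vel", received.append)  # e.g. a motor controller
bus.subscribe("/cmd_vel", received.append)  # e.g. a logger on the same topic
bus.publish("/cmd_vel", {"linear": 0.5, "angular": 0.0})
# Both subscribers receive the message without knowing the publisher.
```

The point of the sketch is the anonymity of the endpoints: nodes can be added, removed or distributed across machines without any publisher changing, which is what makes the model attractive for multi-robot and distributed deployments.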



Quote for the day:


"Structure is more important than content in the transmission of information." -- Abbie Hoffman


Daily Tech Digest - October 25, 2018


Digital channels can provide an effective gateway to emotionally connect an organization to its consumers. Technology companies that are consumers’ favorite brands not only have best-in-class digital capabilities; they also do a superior job integrating digital and physical environments and integrating both strategically to foster an emotional connection. Amazon’s digital prowess allows customers to discover, research, and buy products in minutes, while enabling its physical supply chain to deliver the goods most efficiently. Merging the physical with the virtual/digital is key to superior customer experience: putting the “real in digital and digital in real.” According to our survey, consumers are more likely to increase use of digital channels (both online and mobile) if banks increase security, provide more real-time problem resolution, and allow for more regular banking transactions to be handled digitally. On the other side, adding digital self-service screens at brick-and-mortar locations, or being able to connect with a bank representative virtually will increase consumers’ likelihood to use branches.



DevSecOps An Effective Fix for Software Flaws

Veracode judges the duration of a flaw based on how many times the same issue shows up in a scan after its initial discovery, Eng says. "If the flaw shows up three, four, five times, we can see that this was discovered in January, and you scanned it every month, and it's still there in May—then in June it goes away. So you assume that to mean that they closed it after four months," he explains. Eng's use of "months" as the time scale for remediation is not arbitrary. According to the Veracode research, more than 70% of all flaws were still present a month after initial discovery, and nearly 55% had not been remediated after three months. In fact, while roughly a quarter of all code flaws were dealt with inside 21 days, another quarter were still open issues after a year. The length of time from discovery to fix varied according to the flaw's severity — but not very much, Eng says. Based on a scale that rates the most severe issues a 5 and the least severe a 1, he explains, "We expected the fives to be fixed the fastest and then the fours, threes, twos, but it wasn't like that."
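The counting method Eng describes can be sketched roughly as follows. This is a hypothetical helper assuming one scan per month, not Veracode's actual code:

```python
def months_to_fix(scans):
    """Given ordered (month_label, flaw_present) scan results, return
    the number of months from first detection to the last scan where
    the flaw was still present, or None if it was never closed."""
    first = last = None
    closed = False
    for i, (_, present) in enumerate(scans):
        if present:
            if first is None:
                first = i
            last = i
        elif first is not None:
            closed = True
            break
    if first is None or not closed:
        return None
    return last - first

# Eng's example: discovered in January, still present in May, gone in June.
history = [("Jan", True), ("Feb", True), ("Mar", True),
           ("Apr", True), ("May", True), ("Jun", False)]
print(months_to_fix(history))  # 4 (closed after four months)
```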


Cathay Pacific under fire over breach affecting 9.4 million passengers


Brian Vecci, technical evangelist at Varonis, said that as insiders and external actors get more sophisticated, organisations must be able to do a better job of detecting suspicious activity quickly and reducing the time it takes to investigate an incident. “Months went by between when this attack was apparently noticed and when investigators figured out sensitive data might have been stolen, and then almost half a year passed before it was announced,” he said. “That is unacceptable and highlights just how far behind the eight ball most organisations are when it comes to threat hunting and incident response.” The data breach includes 860,000 passport numbers, about 245,000 Hong Kong identity card numbers, 403 expired credit card numbers and 27 credit card numbers with no card verification value (CVV) that were accessed, although the airline claims no passwords were compromised. Breached data also includes passenger names, nationalities, dates of birth, telephone numbers, email and physical addresses, passport numbers, identity card numbers and historical travel information – all extremely valuable to cyber criminals for identity theft, phishing and fraud.


The US pushes to build unhackable quantum networks


The QKD approach used by Quantum Xchange works by sending an encoded message in classical bits while the keys to decode it are sent in the form of quantum bits, or qubits. These are typically photons, which travel easily along fiber-optic cables. The beauty of this approach is that any attempt to snoop on a qubit immediately destroys its delicate quantum state, wiping out the information it carries and leaving a telltale sign of an intrusion. The initial leg of the network, linking New York City to New Jersey, will allow banks and other businesses to ship information between offices in Manhattan and data centers and other locations outside the city. However, sending quantum keys over long distances requires “trusted nodes,” which are similar to the repeaters that boost signals in a standard data cable. Quantum Xchange says it will have 13 of these along its full network. At these nodes, keys are decrypted into classical bits and then returned to a quantum state for onward transmission. In theory, a hacker could steal them while they are briefly vulnerable.
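The tamper-evidence property behind QKD can be illustrated with a toy simulation in the spirit of the BB84 protocol. This is a heavily simplified sketch, not a model of Quantum Xchange's actual system:

```python
import random

def sifted_error_rate(n=20000, eavesdrop=False, seed=7):
    """Toy BB84-style simulation: after sifting (keeping only photons
    where sender and receiver used the same basis), errors should be
    zero -- unless an eavesdropper measured in between, which
    disturbs roughly 25% of the sifted bits."""
    rng = random.Random(seed)
    errors = kept = 0
    for _ in range(n):
        a_bit, a_basis = rng.randint(0, 1), rng.randint(0, 1)
        bit, basis = a_bit, a_basis          # photon state in flight
        if eavesdrop:
            e_basis = rng.randint(0, 1)
            if e_basis != basis:             # wrong basis: random outcome
                bit = rng.randint(0, 1)
            basis = e_basis                  # photon re-emitted in Eve's basis
        b_basis = rng.randint(0, 1)
        if b_basis != a_basis:
            continue                         # sifting: bases must match
        b_bit = bit if b_basis == basis else rng.randint(0, 1)
        kept += 1
        errors += (b_bit != a_bit)
    return errors / kept

print(sifted_error_rate())                # 0.0: channel is clean
print(sifted_error_rate(eavesdrop=True))  # ~0.25: intrusion is visible
```

By publicly comparing a random sample of the sifted key, the endpoints can detect an abnormal error rate and discard the key, which is the "telltale sign of an intrusion" described above.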


Technology risks: What CIOs should know and steps they can take

CIOs should ensure that any new technology is only accessible to those who absolutely need it for their job, OpenVPN's Dinha recommended. Any access point should utilize two-factor authentication to keep hackers from taking control with brute-force attacks, and CIOs should educate their teams to make sure they understand technology risks and their role in protecting the company's data and privacy, he said. "Have a clear policy on how cybersecurity is managed with each individual piece of new technology and educate everyone on the best practices," Dinha said. When developers are creating AI or task automation, CIOs should be wary of what shortcuts their teams take and what "Band-Aids" are being deployed, SiteLock's Ortega said. One major concern is to ensure that AI has access only to the data necessary to complete its assigned task, she explained. "Taking a proactive approach and instilling a culture of security awareness stops convenience from becoming dangerous, keeping sensitive data safe at every level," Ortega said.
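One common concrete form of the two-factor authentication Dinha recommends is a time-based one-time password. A minimal RFC 6238 TOTP sketch using only Python's standard library (illustrative, not a production implementation):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Minimal RFC 6238 time-based one-time password: the second
    factor is derived from a shared secret and the current
    30-second time window."""
    key = base64.b32decode(secret_b32)
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t // step))
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F              # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" in base32), at t=59s:
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59))  # 287082
```

Because the code changes every 30 seconds and is never reused, a stolen password alone is not enough for a brute-force login, which is exactly the exposure Dinha warns about.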


How 802.11ax Improves the Experience for Everyone

In 802.11ax OFDMA, the access point assigns client traffic to sub-channels, not just for the downlink but also for the uplink. The new ‘trigger frame’ mechanism allows the access point to poll clients to discover what traffic they wish to send on the uplink. When it collects the trigger frame responses, it designs a schedule and sends it to clients in another trigger frame. Clients then construct frames according to their instructions, setting data rates, transmit levels and sub-channels, and transmit data frames back to the access point. The other multi-user mechanism in 802.11ax is multi-user MIMO. This uses the same trigger-frame control protocol as OFDMA, improving on 802.11ac. Multi-user MIMO is itself a complex protocol, requiring sounding packets to determine multipath conditions and group MIMO clients, all under the control of the access point. At any moment, the access point can choose to use traditional single-user transmissions, or multi-user, with OFDMA or MIMO. This opens new dimensions in traffic management.
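The trigger-frame scheduling idea can be caricatured in a few lines. This is a deliberately simplified sketch; real 802.11ax resource-unit allocation is far more involved, and the function and client names here are invented:

```python
def build_uplink_schedule(reports, resource_units):
    """Caricature of the AP's job after collecting trigger-frame
    responses: assign each reporting client a resource unit
    (sub-channel), deepest queue first.
    reports: {client: queued_bytes}; resource_units: RU labels."""
    ordered = sorted(reports, key=reports.get, reverse=True)
    return dict(zip(ordered, resource_units))

reports = {"laptop": 1200, "phone": 300, "sensor": 4800}
print(build_uplink_schedule(reports, ["RU-1", "RU-2", "RU-3", "RU-4"]))
# {'sensor': 'RU-1', 'laptop': 'RU-2', 'phone': 'RU-3'}
```

The essential shift from earlier Wi-Fi generations is visible even in this toy: the access point, not the clients, decides who transmits, when, and on which sub-channel.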


Bridging the IT Talent Gap: Find Scarce Experts

The technology industry's unemployment rate is well below the national average, forcing companies to compete aggressively for top talent. When presented with a range of recruitment strategies by a recent Robert Half Technology questionnaire — including using recruiters, providing job flexibility and offering more pay — most IT decision makers said they are likely to try all approaches in order to land the best job candidates for their teams. ... Look beyond the typical sources, suggested Art Langer, a professor and director of the Center for Technology Management at Columbia University and founder and chairman of Workforce Opportunity Services (WOS), a nonprofit organization that connects underserved and veteran populations with IT jobs. "There is a large pool of untapped talent from underserved communities that companies overlook," he explained. Businesses are now competing in a global market. "New technology allows us to connect with colleagues and potential partners around the world as easily as with our neighbors," Langer said. "Companies hoping to expand overseas can benefit from employees who speak multiple languages."


Scaling Agile in a Data-Driven Company


Agile is, first of all, a mindset: practicing Agile is not the same as being Agile. Changing and evolving the organization's mindset was not easy. Understanding and assimilating the values and principles of the Agile Manifesto requires exercise, practice, patience and continuous work with people and on the company culture. Aspects such as micro-management and continuous pressure on the teams were part of our daily life, and only with continuous coaching and on-the-job training did we manage to bring out the value and the trust of an empowered and autonomous team. The interpretation of the Product Owner and Scrum Master roles was also very difficult at the beginning: the PO was often focused more on “how to do” than on “what to do”, while Scrum Masters who came from technical leadership roles often did not focus on their role as facilitators/servant leaders. It was important to understand the essence of the two roles in Scrum. An Agile transformation is first of all a cultural transformation; only then does it become a process change, because the process is the child of the culture.


U.S. state banking regulators sue government to stop fintech charters

A body of U.S. state banking regulators on Thursday sued the federal government to void its decision to award national bank charters to online lenders and payment companies, saying it was unconstitutional and puts consumers and taxpayers at risk. The Conference of State Bank Supervisors (CSBS) said it had filed a complaint in the U.S. District Court for the District of Columbia against the Office of the Comptroller of the Currency (OCC) over its plan, announced in July, to issue bank charters to financial technology firms. “Common sense and the law tell us that a nonbank is not a bank. Thus, CSBS is calling on the courts to stop the unlawful, unwarranted expansion of powers by the OCC,” John Ryan, CSBS president and CEO said in a statement. Fintech firms have long pushed for national bank charters to let them operate nationwide without needing licenses in every state, a process they say can impede growth and boost costs. OCC spokesman Bryan Hubbard did not immediately respond to a request for comment. The regulator has previously said it would vigorously defend its authority to grant the charters.


Defense, security and the real enemies

Recognize the dangers presented by these countries at all levels of government. This is one of the times when party affiliation or stances on issues do not matter. We need to take the agencies and people we’ve empowered with H.R. 1616, the Strengthening State and Local Cyber Crime Fighting Act of 2017, and Executive Order 13800, Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure, both of which have been signed by President Trump, and make actual protection the national priority. The latter is very comprehensive and provides an excellent start as to what companies should be doing. We need to plan to protect what we deploy as part of how we implement technology, keep the technology as current as possible and, most importantly, keep it well-protected with an engaged team. We make it easy for Moscow, Beijing or Pyongyang when we don’t protect ourselves. Many of these successful attacks exploit long-standing security holes to devastating effect.



Quote for the day:


"Before you are a leader, success is all about growing yourself. When you become a leader, success is all about growing others" -- Jack Welch


Daily Tech Digest - October 24, 2018


Despite the well-publicized growth in cyber-attacks every year, both in number and complexity, organizations are still struggling to implement effective security policies. It’s no secret that weak passwords are a leading security threat and bad password habits are far too common. Yet organizations are struggling to quantify their own level of password risk, even those that use password managers. Why? They lack proof of their policies’ effectiveness. They’re missing visibility into their employees’ behaviors. And they can’t verify how they compare to others of similar size, industry or location, including competitors. That is why we undertook an effort to analyze the password habits of employees at 43,000 organizations of all sizes and across industries that use the LastPass password manager. Not only does the report reveal real password behaviors in the workplace, but it offers the first true benchmark that CISOs and other IT professionals can use to see how they rank compared to other similar businesses and how to improve their password security.



Is the IoT in space about to take off?
Last month, cloud leader Amazon Web Services (AWS) struck a deal with satellite provider Iridium to “bring internet connectivity to the whole planet.” The deal calls for them to develop a satellite-based network called CloudConnect, designed specifically for IoT applications. Similarly, earlier this month, U.S.-based Orbcomm, which provides satellite IoT and machine-to-machine communications services, announced it will work with Asia Pacific Navigation Telecommunications Satellite (APNTS) to provide its services in China. Also in October, Semtech and Alibaba Cloud agreed to develop an IoT network in China using small satellites in low Earth orbit — reportedly just two of many companies looking to build such networks. The IOTEE Project (Internet of Things Everywhere on Earth), for example, has been funded by the European Union to provide IoT LPWA services from space. It’s unclear whether it’s the right time for these efforts to come to fruition. There is a market available: it turns out that despite their rapid proliferation, conventional terrestrial networks cover only a small percentage of Earth’s surface.


The issue was in the source code of the jQuery File Upload plugin, originally developed by Tschan, so the vulnerability could affect many other projects. According to GitHub, jQuery File Upload is the most starred -- meaning users mark it to signal interest and support -- jQuery plugin, and also the most forked. Cashdollar said the plugin has been forked more than 7,800 times and could have been built into thousands of other projects, making it difficult to determine how widespread the jQuery plugin vulnerability could be. "Unfortunately, there is no way to accurately determine how many of the projects forked from jQuery File Upload are being properly maintained and applying changes as they happen in the master project," Cashdollar wrote. "Also, there is no way to determine where the forked projects are being used in production environments if they're being used in such a way. Moreover, older versions of the project were also vulnerable to the file upload issue, going back to 2010."
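The flaw class at issue, unrestricted file upload, is typically mitigated with strict server-side validation. A hedged sketch of that kind of check (illustrative Python, not the actual jQuery File Upload patch, which is server-side PHP):

```python
import os

ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif"}

def is_safe_upload(filename):
    """Allowlist check on the server side: reject executable types
    (e.g. .php) and path traversal, regardless of what the client
    claims about the file."""
    if "/" in filename or "\\" in filename or filename.startswith("."):
        return False
    ext = os.path.splitext(filename.lower())[1]
    return ext in ALLOWED_EXTENSIONS

print(is_safe_upload("cat.png"))    # True
print(is_safe_upload("shell.php"))  # False
```

The design point is to validate on the server against an allowlist of safe types rather than trusting the client or relying on web-server configuration files, which is the assumption that broke in this case.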


How science can fight insider threats

Detecting insider threats using conventional security monitoring techniques is difficult, if not impossible. ... The emerging field of security analytics uses machine learning technologies to establish baseline patterns of human behavior, and then applies algorithms and statistical analysis to detect meaningful anomalies from those patterns. These anomalies may indicate sabotage, data theft, or misuse of access privileges. This can be accomplished by establishing a contextual linked view and behavior baseline from disparate systems including HR records, accounts, activity, events, access repositories, and security alerts. This baseline is created for the user and their dynamic peer groups. As new activities are consumed, they are compared to the baseline behaviors. If the behavior deviates from the baseline, it is deemed an outlier. Using risk scoring algorithms, outliers can be used to detect and predict abnormal user behavior associated with potential sabotage, data theft or misuse.
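The baseline-and-outlier idea can be sketched with a simple statistical test. This is a toy stand-in for commercial risk-scoring algorithms; the activity metric and threshold are illustrative:

```python
import statistics

def is_outlier(baseline, value, threshold=3.0):
    """Flag an activity count that deviates from the user's baseline
    by more than `threshold` standard deviations -- a toy stand-in
    for the risk-scoring algorithms described above."""
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline)
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Baseline: a user's file downloads per day over the past week.
downloads = [12, 9, 15, 11, 13, 10, 14]
print(is_outlier(downloads, 16))   # False: within normal variation
print(is_outlier(downloads, 480))  # True: possible bulk exfiltration
```

Real security analytics products compare against dynamic peer groups and many signals at once, but the core mechanism is the same: score the deviation from an established baseline and escalate the outliers.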


Datacentre glitches expose data loss risks

The research found that 29% of respondents had suffered one or two events of data loss because of their datacentre provider letting them down – with 18% saying they had suffered data losses three or more times during the past 12 months. Jon Arnold, managing director at Volta Data Centres, said: “Outages and data loss can be due to a variety of factors, such as network glitches, human error or inadequate maintenance, but whatever the reason, organisations need to be taking a far more robust approach to datacentre due diligence. “Where is the guarantee of 100% uptime? What power resilience is in place? How many different connectivity options are available – and do they run across different networks for greater contingency? These are all questions businesses need to ask when choosing datacentre providers – or face the risk of more downtime.” The survey also showed that 35% of organisations still locate IT assets mainly on-premise, with 29% shifting mainly to the cloud.


Culture the missing link for cybersecurity's weakest link

Hibbs said that if he could take the humans out of the loop then the risk would drop to zero, but obviously that's incompatible with the reality of human communication within and between organisations made of, you know, humans. "I think we'll always be in that state. While we do need to make them more vital team members, we need to change the culture, which is very critical to reduce it, but there'll always be that risk." But focusing on phishing awareness training and the like is "too much of a tactical response", according to Valerie Abend, who heads up Accenture's global cyber regulatory services. "In order for us to get ahead of it, more than just focusing on that phishing aspect and not further risk, the bad guys are just going to keep outsmarting us. We have to be a little bit strategic on where we're focusing raising the level of attention and awareness," Abend said. Awareness-raising and anti-phishing campaigns are important, she said, but organisations need to raise the level of board and senior management involvement in managing the risk.


Google just quietly gave us a killer midrange Android option

Google itself is selling the Pixel 2 for $649, in its lowest configuration, and you can find mint-quality used models in the $400 range. Those prices only seem likely to inch downward as time wears on. But there's more: Just think how this situation will spread starting next year, when the Pixel 2 will be two years old and yet still have a full year of pending updates under its belt. You'll essentially have a menu of price points available for any budget: the current-gen model, with three full years of updates included; the previous-gen model, with two solid years of support still ahead; and the two-year-old version, with a year's worth of foundational improvements still remaining. Google's software focus is thus not only altering the lifespan and value of a flagship phone; it's also completely changing what it means to get a midrange or budget-level phone, thanks to that cascading effect. And even if Google itself doesn't opt to keep selling those older models after a while, the used phone marketplace will provide an intriguing new level of aftermarket value.


8 ways to successfully get AI and analytics into production


“When you build a production analytic or AI system, there are two parts of the problem. One is having the right data and data access, and the other part of the problem is the analytics: actually running the software to analyze the data. Analytics applications require a lot of coordination, and with the increasingly widespread containerization of applications, it’s essential to have a way to coordinate processes running in containers. Kubernetes, an open-source orchestration system for managing deployment of containerized applications, is emerging as a leading solution. But to avoid being limited as to which applications can be containerized, you need a data platform with the capability to persist data (state) from containerized applications as a variety of data structures. This powerful combination of Kubernetes and an appropriate data platform offer a big advantage for production systems.”


GreyEnergy threat group detected attacking high-value targets


Cherepanov and Lipovsky said the similarities between GreyEnergy and BlackEnergy -- overlap in malware frameworks and code, overlap in targets and regions of activity, the timing of GreyEnergy beginning activity, and both groups using active Tor relays for command-and-control servers -- all indicate that GreyEnergy is the successor to BlackEnergy. However, although experts praised the research by ESET, not all agreed that the evidence supported the connection between the groups or any conclusions that GreyEnergy is specifically targeting ICS infrastructure. Robert Lee, founder and CEO of Dragos Inc., noted on Twitter that the GreyEnergy "tool is a general backdoor and doesn't contain ICS capabilities but neither did BlackEnergy3." "I think it's premature to make assessments on adversary intent, with only three identified victims the focus may be larger than ICS and assessing how the adversary might use the access would be low confidence at best," Lee wrote on Twitter.


CIOs and the cloud: The future of European enterprise software


“The cloud helps when providing compliance in terms of GDPR and governance,” he said. “If we didn’t use the cloud, I’m not even sure how we’d tackle those requirements. Because we use the cloud, we’ve had to work out where all our data resides and that means we’re in a great place in terms of security and legislation.” “We know where our information sits and we can then just apply policies as we need to. Speaking to other CIOs, I don’t think other businesses in other sectors are always in that position. That’s a living nightmare.” That view resonates with Martyn Wallace, chief digital officer for the Scottish Local Government Digital Office. Like Dowden, Wallace believes too many executives fear going all-in with the cloud and believe information is only safe in an internal data centre. Naysayers should recognise the power of working with a technology specialist like Amazon, Google or Microsoft, who have the weight to ensure data stays safe and secure.



Quote for the day:


"The most common way people give up their power is by thinking they don't have any." -- Alice Walker