Daily Tech Digest - April 05, 2019

India’s new Software Products Policy marks a Watershed Moment in its Economic History


It is in this light that the recently rolled out National Software Products Policy (#NSPS) by the Ministry of Electronics & IT (MeitY), Government of India marks a watershed moment. For the very first time, India has officially recognised that software products, as a category, are distinct from software services and need separate treatment. So dominated was the Indian tech sector by outsourcing and IT services that “products” never got the attention they deserved; as a result, that industry never blossomed and was relegated to a tertiary role. Remember the quote: “What can’t be measured, can’t be improved; and what can’t be defined, can’t be measured.” The software policy is in many ways a recognition of this gaping chasm and marks the state’s stated intent to correct it by defining, measuring and improving the product ecosystem. Its rollout is the culmination of a long period of public discussions and deliberations in which the government engaged with industry stakeholders, Indian companies, multinationals, startups, trade bodies and others to forge it.


How Lessons from Production Adoption Resulted in a Rewrite of the Service Mesh

Linkerd is an open-source service mesh and Cloud Native Computing Foundation member project. First launched in 2016, it currently powers the production architecture of companies around the globe, from startups like Strava and Planet Labs to large enterprises like Comcast, Expedia, Ask, and Chase Bank. Linkerd provides observability, reliability, and security features for microservice applications. Crucially, it provides this functionality at the platform layer. This means that Linkerd’s features are uniformly available across all services, regardless of implementation or deployment, and are provided to platform owners in a way that’s largely independent of the roadmaps or technical choices of the developer teams. For example, Linkerd can add TLS to connections between services and allow the platform owner to configure the way that certificates are generated, shared, and validated without needing to insert TLS-related work into the roadmaps of the developer teams for each service.


How to get your company’s people invested in transformation


Transformation, driven by new industrial platforms, geopolitical shifts, global competition, and changing consumer demand, is front-page news because it moves share prices, tests leadership ability and mettle, and creates new business models that change how whole sectors operate. But we rarely talk about the people who live through and help drive these often-wrenching changes. Can a global company successfully transform without bringing along its 30,000 employees? I doubt it. The human dimension is profoundly important. But too often, it’s forgotten or under-recognized in the rush to restructure or launch initiatives. Leaders who can engage emotionally with employees and humanize change initiatives by creating inspiration and innovation are most likely to succeed. This may sound obvious, but it is a challenge for Type A leaders who overly emphasize process, effort, and control. Transformational change often requires leaders to adopt an “antihero” style, characterised by empathy, humility, self-awareness, flexibility, and an ability to acknowledge uncertainty.


Secure Your Migration to the Cloud


Having a clearly defined and enforceable data lifecycle strategy, ensuring data is protected in transit and at rest, is one of the most important aspects of any cloud migration. You need to understand what sensitive data you are migrating and leverage the tools and processes to keep it protected, including cloud access security brokers (CASB). A cloud access security broker, according to Gartner, “is an on-premises or cloud-based security policy enforcement point that is placed between cloud service consumers and cloud service providers to combine and interject enterprise security policies as cloud-based resources are accessed.” CASBs are powerful tools because they give you a centralized view of all your cloud resources. Many IT teams that deploy a CASB for the first time realize that there are many cloud resources in use that they were previously unaware of, some of which may be placing sensitive data at risk. By using CASBs and other tools, you can regain visibility into where data resides and apply the proper safeguards to keep it protected.



The Matrix at 20: A Metaphor for Today's Cybersecurity Challenges

Shape-shifting is core to the movie's plot. "Agents," Neo's sinister enemies, take over the bodies of innocent bystanders in their relentless pursuit of Neo and his crew. The cybersecurity analogy here is an advanced persistent threat (APT) group utilizing stolen credentials to gain a foothold into an organization — one of the most pernicious elements facing today's enterprise. Modern breaches often involve malicious APT-like agents gaining access to an employee's credentials in order to achieve their goal. This usually happens as a result of spearphishing attempts, enabling attackers to steal customer data, intellectual property, or financial and banking data. Just as Neo stays vigilant in looking for constant threats, CISOs fight the epidemic of stolen credentials with proactive risk-based authentication techniques that stop attackers from even obtaining a foothold in the first place. The key in both situations is having visibility into attacker behavior. As Neo begins his journey to the "real world," a jaded crew member, Cypher, asks, "Why, oh why, didn't I take the blue pill?"


Bolster enterprise application support from dev to deployment


Declarative models help teams discover how code has deviated from the declared goal state. Use the diff feature in these tools -- some examples include kubectl diff for Kubernetes and the --diff option in Ansible playbooks -- to compare current and goal-state conditions and alert you to differences. Jenkins is a common automation server to integrate CI/CD with a repository, with competitors such as Integrity, GoCD or GitLab CI/CD. When choosing a tool, ensure that it will integrate with your organization's repository to maintain that single source of truth. There are broad management toolkits available for IT organizations that don't create tool integrations internally. For example, Weaveworks offers an integrated tool and a GitOps-centric distribution of Kubernetes, and it recommends and supports tool integrations for the latter. Diamanti and Rancher Labs have similar container platform capabilities. Start a GitOps approach to enterprise application support with a review of the components and capabilities of a managed toolkit, then weigh the benefits of a single-source option or put together a collection of tools to meet your specific requirements.
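The diff-based drift check described above can be sketched generically. The following is an illustrative Python sketch, not the actual `kubectl diff` or Ansible `--diff` implementation; the manifest lines are hypothetical example values. It compares the declared goal state (what Git says should be running) against the observed live state:

```python
import difflib

# Hypothetical goal state (as declared in the Git repo) vs. observed live state.
declared = ["replicas: 3", "image: app:v2"]
live = ["replicas: 2", "image: app:v2"]

# The same idea underlies `kubectl diff -f deployment.yaml` and
# `ansible-playbook site.yml --check --diff`: compute the delta, then decide.
drift = list(difflib.unified_diff(declared, live,
                                  fromfile="declared", tofile="live",
                                  lineterm=""))
if drift:
    # In a GitOps pipeline, non-empty output would raise an alert or trigger a re-sync.
    print("\n".join(drift))
```

Empty output means the live environment matches the single source of truth; anything else is drift to be reconciled.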


AI pioneer: ‘The dangers of abuse are very real’

Killer drones are a big concern. There is a moral question, and a security question. Another example is surveillance — which you could argue has potential positive benefits. But the dangers of abuse, especially by authoritarian governments, are very real. Essentially, AI is a tool that can be used by those in power to keep that power, and to increase it. Another issue is that AI can amplify discrimination and biases, such as gender or racial discrimination, because those are present in the data the technology is trained on, reflecting people’s behaviour. ... Deep learning, as it is now, has made huge progress in perception, but it hasn’t yet delivered systems that can discover high-level representations — the kind of concepts we use in language. Humans are able to use those high-level concepts to generalize in powerful ways. That’s something that even babies can do, but that machine learning is very bad at.


Reliance Jio’s latest acquisition is a $100M bet on the future of internet users in India

Jio’s aggressive data plan strategy, which started with free voice calls and free 4G data, disrupted India’s telecom market and forced the incumbents to move quicker and reduce prices — mobile data is reportedly now cheaper in India than anywhere else on the planet. It was, of course, a huge hit with consumers. The operator has consistently led on 4G subscriber numbers and it is ranked third overall with over 280 million customers, or around 23 percent market share. Clearly, keeping up with what’s next is a critical part of its plan to grow bigger still. Vaish said Haptik wasn’t under pressure to sell but the team found an “ideal match in terms of philosophy” with Jio, which is also exploring alternative ways to enable consumers to interact with its devices and service. The company has a ‘Hello Jio’ assistant on its devices, and Haptik may help it further its strategy in the future although Vaish said that hasn’t been nailed down at this point. Jio is allowing Haptik to continue to work with customers because, at this point, enterprise services are the “only proven business” for conversational platforms, Vaish said.



Sorry, graphene—borophene is the new wonder material that’s got everyone excited


This exotic substance wasn’t synthesized until 2015, using chemical vapor deposition. This is a process in which a hot gas of boron atoms condenses onto a cool surface of pure silver. The regular arrangement of silver atoms forces boron atoms into a similar pattern, each binding to as many as six other atoms to create a flat hexagonal structure. However, a significant proportion of boron atoms bind only with four or five other atoms, and this creates vacancies in the structure. The pattern of vacancies is what gives borophene crystals their unique properties. Since borophene’s synthesis, chemists have been eagerly characterizing its properties. Borophene turns out to be stronger than graphene, and more flexible. It is a good conductor of both electricity and heat, and it also superconducts. These properties vary depending on the material’s orientation and the arrangement of vacancies. This makes it “tunable,” at least in principle. That’s one reason chemists are so excited. Borophene is also light and fairly reactive. That makes it a good candidate for storing metal ions in batteries.


Discovering Culture through Artifacts

First of all, managers have power over their team. This power often takes the form of rewards (pay raises, promotions, etc.) and punishments (bad performance reviews, terminations, etc.). In both cases, a manager rewards and punishes members of the team based on their behavior. It is through these incentives and disincentives that the culture of a team, organization and/or company is defined. Members of the team learn through observation which behaviors are rewarded or punished, and tailor their own behavior in turn. Interestingly enough, it doesn’t matter what a manager says, but rather which behaviors they reward or punish. A second means by which a manager can influence the culture of a team is by modeling the behavior they want the team to exhibit. One can learn a lot by observing the behavior of others, especially when that person is in a position of power or influence. I see this a lot when it comes to modeling behavior around giving and receiving feedback. Great managers know how to listen and thank people for feedback, and they model this behavior for their team.



Quote for the day:


Confident and courageous leaders have no problems pointing out their own weaknesses and ignorance. - Thom S. Rainer


Daily Tech Digest - April 04, 2019

Is it too soon for AI in the education landscape?


Even if schools did have enough money, not only is their choice of software limited, but many heads and teachers are neither trained nor qualified to either select or use even basic educational technology, let alone AI tools. There is also a widespread fear of the unknown, part of which includes the much-discussed issue of jobs being automated out. Another major concern relates to ethics, believes Elena Sinel, who is a member of the All-Party Parliamentary Group on AI and also founder of Acorn Aspirations and Teens in AI, which provide various forums for young people to learn tech skills. A key challenge in this context is in ensuring AI does not end up doing “more harm than good”, she says. “So it’s about looking at who is accountable if things go wrong – for example, what happens if there’s a data leak and who is ultimately in charge of the data? Or what happens if AI doesn’t assess students fairly or accurately in exams, for instance?” says Sinel. Such questions also fit into a wider debate around whether schools are currently set up to provide young people with the skills required for the workplace of the future, or whether fundamental change is required.



Prepare Now for Next-Generation Cyber Threats

Impacts will be felt across a range of industries. Malicious attacks may result in automated vehicles changing direction unexpectedly, high-frequency trading applications making poor financial decisions, and airport facial recognition software failing to recognize terrorists. Where machine learning systems are compromised, organizations will face significant financial, regulatory, and reputational damage, and lives will be put at risk. Nation states, terrorists, hacking groups, hacktivists, and even rogue competitors will turn their attention to manipulating machine learning systems that underpin products and services. Attacks that are undetectable by humans will target the integrity of information. Widespread chaos will ensue for those dependent on services powered primarily by machine learning. Companies should assess their offerings and dependency on machine learning systems before attackers exploit related vulnerabilities.



Rethinking reskilling: How to find key hidden talent within your organization  


To overcome the talent gap and foster adaptive workforces able to keep up with ongoing transformations in tech and industry, there is a clear need to shift from traditional L&D techniques like seminars and online training sessions to leveraging existing experts within the organization, so that we harness the collective intelligence of individuals and teams. These are employees who often already have the skills and knowledge that others need and follow the development of those fields closely. As a result, they can curate and contextualize that knowledge better than any external teacher, making it easier for others to absorb it. Companies also need to tap what can be a hidden resource of knowledge, identifying “invisible” go-to resources; i.e., knowledgeable employees who may be currently unrecognized or perhaps are not even hierarchically high in the company structure, but seem to be go-to people for large networks of employees. Organizations can consider practices similar to Genpact’s Genome reskilling initiative, which uses advanced human network analysis techniques to identify these invaluable knowledge leaders outside of the usual suspects of widely known company subject matter experts.


A Framework for High-Value Big Data

More and more companies are achieving the monetization of data by improving efficiencies, developing new products, growing new markets, and by reducing risks. Saxena talked about Netflix's original series like Orange is the New Black that are a direct result of data-driven innovation. She elaborated on the big data framework elements. Organization maturity is about hard assets in an organization, like its strategy, data, quality etc. Every organization should have a business strategy, as well as a data strategy. The internal competencies are about people, and focus on soft assets like leadership, engagement, and adaptability. Health care organizations in the field of precision health like Geisinger are taking advantage of big data and genomic sequencing to transform healthcare practices, in order to prevent people from becoming sick and to treat people more as individuals (customers), rather than just patients. Data governance initiatives should include aspects of data integration, quality, accessibility and data security.


Leading DevOps program Chef goes all in with open source


What does that mean for Chef's customers? Jacob said, "Chef Software produces only open-source software projects, in the commons. It distributes that software as an enterprise product. For current Chef Software customers, nothing changes. For enterprise users of Chef products who are not customers, they can decide to either pay for Chef's distribution, or they can make or consume an alternative." Going deeper in the new Chef FAQ, Chef stated: "We will begin to attach commercial license terms to our software distribution (binaries) with the next major release." So, if you download and compile the code yourself, you're welcome to use it. But, if you download the binaries, you'll have to pay for them. If that sounds familiar, it should. It's a variation of how Red Hat and SUSE, for example, release their enterprise Linux distributions.  . . . For existing commercial customers there will be no immediate changes until their next renewal when they will get licensed onto new SKU's representing the same core products."


Bitcoin, BlackRock And The Rise Of Alternatives

As an alternative asset, the appeal of crypto is that its movements are uncorrelated with the rest of the market, says Mark Yusko, CEO of Morgan Creek Capital Management, which oversees $1.5 billion in assets, including a $40 million blockchain-focused VC fund. “Stocks or bonds derive their value from factors like GDP growth, profitability and interest rates. A cryptocurrency network derives its value from usage growth, adoption, regulation and technology. All of those things are uncorrelated with traditional measures of stocks and bonds.” ... Yusko claims inbound interest from institutional investors is growing. This week, he’s meeting with a California municipal pension fund. He adds that more institutional-investor conferences are including talks on cryptocurrencies. Teddy Fusaro, chief operating officer of Bitwise, a San Francisco digital asset manager and creator of the first crypto index fund, says institutional investors are showing increasing sophistication. “A year ago,” he says, “the conversation might have been, ‘How do we know bitcoin is going to survive?’ Or ‘Who is the CEO of bitcoin?’


6 Essential Skills Cybersecurity Pros Need to Develop in 2019

On their face, these stats may engender a bit of complacency from cybersecurity professionals. It would only be natural to figure that anybody with a pulse and some security experience has got it made. But here's the rub. Many disruptive forces are at play that are set to drastically change the way security duties are carried out in the coming years. New security automation platforms, new architectures, and complex hybrid cloud implementations require major shifts in bread-and-butter security technical knowledge. Not only is security technology changing rapidly, but so are many of the fundamental roles held by cybersecurity professionals. Tons of emerging technologies and pervasive use of the Internet of Things are touching every aspect of business operating models, and software delivery is becoming more agile and embedded into lines of business. As a result, security pros are tasked to take positions requiring more consultative leadership and more enablement of democratized security across the organization.


What Is a Scaleup Company and How Is It Different from a Startup?

From a venture capital and entrepreneurial perspective, a scaleup company is considered to be in a later growth phase, after successfully maneuvering through the period of being a startup and having established a sustainable business model with a positive outlook on organizational growth and improved profitability. For additional information on this aspect, you can also have a look at “How to Upscale Like a Boss“. It does not take much to “found” a startup company. Anybody with an interesting idea can register a company, which could then be considered a startup. It then either fails or becomes successful after a lot of hard work. The real question is: when does a company stop being a startup? As soon as the startup has finished an MVP (minimum viable product) and has a stable monthly income, which is hopefully more than the company’s expenses, the organization ceases to be a startup. And that’s a good thing. Being a startup is not in itself something good or aspirational. To read more about the exit of this phase, you can also read our article “When Does a Company Stop Being a Startup?“.


Joining Human And Artificial Intelligence

Although the aim of AI is to imitate HI to the point where both are indistinguishable, AI and HI are fundamentally different. Humans learn via the senses and past experience, and they are emotionally intelligent, which is something that AI has yet to crack. But AI is analytical and logical in a way that humans aren’t, and with this, it is capable of formulating and processing in ways that humans can’t. AI can take huge datasets and whittle them down to snippets of relevant information quickly. It can complete tasks in minutes as opposed to days, and it can identify data discrepancies that humans would never spot. Artificial and human intelligence are a match made in business heaven. The AI-HI model is already in practice across a number of sectors. In healthcare, clinical decisions are aided by artificially intelligent systems that search through historical data at a pace that human professionals never could. But, that said, getting a diagnosis direct from AI would be a very different experience to getting it from a doctor or nurse. Naturally you need both – AI augmenting human intelligence can lead to increased efficiency and accuracy.


How the data mining of failure could teach us the secrets of success

Since learning should reduce the number of attempts required before achieving success, it should lead to a narrower distribution of failure streaks than the exponential form predicted by the chance model. But to the surprise of Yin and co, failure streaks do not follow this pattern either. In fact, they have a much fatter-tailed distribution. “These observations demonstrate that neither chance nor learning alone can explain the empirical patterns underlying failures,” the researchers say. So what other factors are important? To find out, Yin and co modeled the way people learn from experience and how this influences their next attempt. In particular, they modeled whether people take into account all their previous experiences or just some of them. The resulting model considers a complete range of learning—from agents who take all their past experience into account to those who do not take any of their past experience into account, and everything in between. The team say the model predicts a phase change in the behavior that matches the empirical data.
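The chance model discussed above can be simulated directly. A minimal sketch under my own illustrative assumptions (the success probability and cutoffs are example values, not figures from the study): if each attempt succeeds independently with a fixed probability, the streak of failures before the first success is geometrically distributed, giving exactly the exponential (thin-tailed) form that the fat-tailed empirical data fails to match.

```python
import random

def failure_streak(p_success, rng):
    """Consecutive failures before the first success, where each attempt
    succeeds independently with probability p_success (pure chance, no learning)."""
    streak = 0
    while rng.random() >= p_success:
        streak += 1
    return streak

rng = random.Random(42)
streaks = [failure_streak(0.3, rng) for _ in range(100_000)]

# Under pure chance, P(streak >= k) = (1 - p) ** k: an exponentially decaying tail.
# Real-world failure streaks decay much more slowly (a fat tail), which is why
# the researchers conclude chance alone cannot explain them.
frac_ge_5 = sum(s >= 5 for s in streaks) / len(streaks)
print(round(frac_ge_5, 3), round(0.7 ** 5, 3))  # simulated vs. theoretical tail mass
```

The simulated fraction of streaks of five or more failures closely tracks the theoretical value (1 − 0.3)⁵ ≈ 0.168; a fat-tailed empirical distribution would sit well above this curve for long streaks.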



Quote for the day:


"Coaching isn't an addition to a leader's job, it's an integral part of it." -- George S. Odiorne


Daily Tech Digest - April 02, 2019

Adopting cloud is not simply a case of lifting and shifting workloads to a designated cloud provider; it also encompasses working out the migration costs of moving infrastructure to the cloud. In addition, the applications earmarked for migration need to be developed for use in the cloud, and companies trying to retrofit their existing ones to fit such an environment will face a huge uphill battle. For that reason, administrators working in greenfield sites have a major advantage over those dealing with brownfield infrastructure. And planning is the absolute make-or-break requirement for a successful cloud deployment. It is important to be realistic about application requirements. It may be simple to say “scale as required”, but that usually comes with a cost that needs to be worked out ahead of time – not just the actual instance cost, but also the technological development and technical debt it will incur. Scaling cannot just be thrown out ad hoc – testing, testing and more testing is key. Finally, not everyone needs auto-scaling, so be honest about the organisation’s requirements. Features cost money, and waste money when they are not used.


The Impact and Ethics of Conversational Artificial Intelligence

Both a recent study from Carnegie Mellon University and a recent Amazon patent for "Voice-based determination of physical and emotional characteristics of users" indicate that far more information can be gleaned from your voice than you thought possible. Perhaps you could already guess that voice analysis can reveal things like your gender or emotions. Did you realize that your height, weight, physical health, mental state, and physical location could also be confidently determined? The Carnegie Mellon study suggested they could even make a fairly accurate 3-D representation of your face, just from your voice. However, while Carnegie Mellon suggests that this could be used for law enforcement, such as identifying hoax callers, Amazon is planning to use it to tailor purchase suggestions — for instance, offering to sell you cough drops if it recognizes that you have a cold. Using this type of analysis would allow our digital assistants to be much more in tune with us. Amazon announced in 2018 that Alexa was going to start acting on “hunches” so that it would every so often make an unprompted suggestion.


Hackers reveal how to trick a Tesla into steering towards oncoming traffic


The problem lay within the single neural network which Tesla uses to detect lanes, among other functions. Images from a camera are processed, input into the network, and output is then saved and added to a virtual map of the vehicle's surroundings. While a controller manages the car's auto-steering decisions, the researchers created an attack scenario in which the feed images were compromised by way of three stickers on the road, which led to the car's trajectory changing. By applying small, inconspicuous stickers to the road, the system failed to notice that the fake lane was directed towards another lane -- a scenario the team says could have serious real-world consequences. The vulnerability and security weaknesses found by Tencent were reported to Tesla and have now been resolved. The findings were shared with attendees of Black Hat USA 2018. "With some physical environment decorations, we can interfere or to some extent control the vehicle without connecting to the vehicle physically or remotely," the team says.


Meta Networks builds user security into its Network-as-a-Service


Ever since its launch about a year ago, Meta Networks has staked security as its primary value-add. What’s different about the Meta NaaS is the philosophy that the network is built around users, not around specific sites or offices. Meta Networks does this by building a software-defined perimeter (SDP) for each user, giving workers micro-segmented access to only the applications and network resources they need. The vendor was a little ahead of its time with SDP, but the market is starting to catch up. Companies are beginning to show interest in SDP as a VPN replacement or VPN alternative. Meta NaaS has a zero-trust architecture where each user is bound by an SDP. Each user has a unique, fixed identity no matter from where they connect to this network. The SDP security framework allows one-to-one network connections that are dynamically created on demand between the user and the specific resources they need to access. Everything else on the NaaS is invisible to the user.


Why so many organizations sideline Internet of Things strategies

Any discussion about the IoT starts with a simple but often overlooked fact: Objects and assets possess no inherent intelligence. It’s all about the “smarts” humans build into them. Consequently, a dozen — or a million — smart devices operating within separate but disconnected systems won’t have the same impact and value as a collection of devices and systems that work together synergistically. In order to slide the dial from tactical to strategic, an enterprise must focus on identifying value points, determining how data can help unlock that value, and connecting the right devices and systems in the right way. When an enterprise pinpoints value — for customers, employees, partners and others — it suddenly holds a map and a compass that points to specific devices, tools, technologies and solutions. However, an IoT platform must also be flexible and agile enough to support changes in devices, software and the overall business environment. Fast pivots and modular deployments — what many describe as agile environments — are now paramount.


Why women still make up only 24% of cybersecurity pros

Despite more women entering and succeeding in the cybersecurity field, pay inequalities persist, the report found. While 29% of men in the field report annual salaries between $50,000-$90,000, only 17% of women do the same. Some 20% of men in cyber earn between $100,000-$499,999, compared to 15% of women. Both male and female cybersecurity professionals share many of the same concerns about their roles, including lack of commitment from upper management, the reputation of their organization, the risk of seeing their job outsourced, a lack of work-life balance, the threat of artificial intelligence (AI) reducing the need for their role, and a lack of standardized cybersecurity terminology to effectively communicate within their organization. "It's an encouraging sign that more women are succeeding in cybersecurity and moving up through the ranks," Jennifer Minella, vice president of engineering and security at Carolina Advanced Digital, Inc. and chairperson of the (ISC)² board of directors, said in the release.


Zuckerberg calls for new internet regulation


Zuckerberg said effective privacy and data protection required a globally harmonised framework. “People around the world have called for comprehensive privacy regulation in line with the European Union’s General Data Protection Regulation (GDPR), and I agree. I believe it would be good for the internet if more countries adopted regulation such as GDPR as a common framework.” New privacy regulation around the world, he said, should build on the protections GDPR provides, it should protect individuals’ rights to choose how their information is used – while enabling companies to use information for safety purposes and to provide services – it should not require data to be stored locally, and it should establish a way to hold companies such as Facebook accountable by imposing sanctions when they make mistakes. “I also believe a common global framework – rather than regulation that varies significantly by country and state – will ensure that the internet does not get fractured, entrepreneurs can build products that serve everyone, and everyone gets the same protections,” Zuckerberg wrote.


Kubernetes Secrets Management

A Kubernetes Secret is mainly designed to carry sensitive information that the web service needs to run. This includes information such as username and password, tokens for connecting with other pods, and certificate keys. Putting sensitive information in a Secret object allows for better security and tighter control over those details. Secrets are also easy to integrate with existing services. You just have to tell the pods to use the custom Secrets you have created alongside the native Secrets created by Kubernetes. This means you can use Secrets to make deploying a web service across multiple clusters easier. It is also worth noting that Secrets are base64 encoded for ‘encryption’ purposes. You can convert strings or values into base64 and revert them back before use. The encoding/decoding process is already built into Kubernetes, eliminating the need for third-party tools when adding this extra layer of security. Storing sensitive environment variables becomes more seamless. It’s important not to commit base64-encoded Secrets, as they can be easily decoded by anyone.
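The encode/decode round trip is easy to demonstrate, and it also shows why base64 is not encryption. A minimal Python sketch with a hypothetical example value (in practice, `kubectl create secret generic ... --from-literal=...` performs this encoding for you):

```python
import base64

# base64 is an encoding, not encryption: anyone who can read the manifest
# can trivially recover the plaintext.
secret_value = "hunter2"  # hypothetical example value
encoded = base64.b64encode(secret_value.encode()).decode()
print(encoded)  # aHVudGVyMg== -- as it would appear in a Secret's data field

decoded = base64.b64decode(encoded).decode()
assert decoded == secret_value  # the round trip recovers the original
```

This is exactly why the article warns against committing base64-encoded Secrets to a repository: decoding requires no key at all.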


When Wi-Fi is mission-critical, a mixed-channel architecture is the best option

For many carpeted offices, multi-channel Wi-Fi is likely to be solid, but there are some environments where external circumstances will impact performance. A good example of this is a multi-tenant building in which there are multiple Wi-Fi networks transmitting on the same channel and interfering with one another. Another example is a hospital where there are many campus workers moving between APs. The client will also try to connect to the best AP, causing the client to continually disconnect and reconnect resulting in dropped sessions. Then there are environments such as schools, airports, and conference facilities where there is a high number of transient devices and multi-channel can struggle to keep up. ... There has been recent innovation from the manufacturers of single-channel systems that mix channel architectures, creating a “best of both worlds” deployment that offers the throughput of multi-channel with the reliability of single-channel. For example, Allied Telesis offers Hybrid APs that can operate in multi-channel and single-channel mode simultaneously.


Building High-Quality Products With Distributed Teams

Another aspect of making a high-quality product is the testing process. She mentioned having a mature testing process, with a test plan and automated integration, load, and stress testing, which allows issues to be identified as early as possible, not at the very last moment. Her advice on developing high-quality products is to make quality your priority and make decisions based on this priority. That means having a mature quality process, having the best software testing engineers on your team, she argued, and working with risks: not ignoring them, but mitigating them. Gorbachik suggested making daily decisions from your high-quality product perspective. For example, you have a choice: deliver the product earlier without automated test coverage, or deliver the product later but cover it with automated tests. If your main target is a high-quality product, then the second option is your choice, she argued.



Quote for the day:


"Management is efficiency in climbing the ladder of success; leadership determines whether the ladder is leaning against the right wall." -- Stephen Covey


Daily Tech Digest - April 01, 2019

Instead of using wipers, Symantec reports that the group’s recent attacks are aimed at data exfiltration using vulnerabilities in a common piece of software. “The main point of entry in recent attacks has been spear-phishing emails capable of delivering malware to the recipient’s computer,” says Dick O’Brien, researcher at Symantec's Security Response. “The group has also attempted to exploit the recently patched WinRAR vulnerability in attacks.” After phishing emails are sent to targeted companies, the victim is encouraged to download a file, JobDetails.rar, which then tries to exploit vulnerability CVE-2018-20250 in WinRAR. A successful infection on an unpatched system allows an attacker to install any file on the computer. ... “Based on its tactics and targets, our assessment is that Elfin is a state-sponsored espionage group,” says O’Brien. “Given the nature of the group and its targets, we can only speculate that the information in question is likely to be of a strategic or economic interest to Elfin’s sponsors.”


Lexing is the process of breaking an input stream of characters into "tokens" - strings of characters that have a "symbol" associated with them. The symbol indicates what type of string it is. For example, the string "124.35" might be reported as the symbol "NUMBER" whereas the string "foo" might be reported as the symbol "IDENTIFIER". Parsers typically use lexers underneath, and then compose the incoming symbols into a syntax tree. Because lexers are called in core parsing code, lexing operations must be reasonably efficient. The .NET regular expression engine isn't really suitable here; while it can work, it actually increases code complexity while diminishing performance. Included in this project is a file called "FA.cs" that contains the core code for a regular expression engine using finite automata, which resolves to the humble yet ubiquitous state machine. Finite state machines are composed of graphs of states. Each state can reference another state using either an "input transition" or an "epsilon transition".
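The NUMBER/IDENTIFIER example above can be sketched as a tiny hand-written lexer. This is an illustrative Python sketch of the idea (not the article's C# FA.cs code); each branch of the loop plays the role of a state in the state machine:

```python
# Minimal lexer: walk the input character by character and report
# (symbol, lexeme) pairs. "124.35" lexes as NUMBER, "foo" as IDENTIFIER.
def lex(text):
    tokens = []
    i = 0
    while i < len(text):
        ch = text[i]
        if ch.isspace():                      # skip whitespace between tokens
            i += 1
        elif ch.isdigit():                    # NUMBER state: digits, optional '.'
            start = i
            while i < len(text) and text[i].isdigit():
                i += 1
            if i < len(text) and text[i] == ".":
                i += 1
                while i < len(text) and text[i].isdigit():
                    i += 1
            tokens.append(("NUMBER", text[start:i]))
        elif ch.isalpha() or ch == "_":       # IDENTIFIER state
            start = i
            while i < len(text) and (text[i].isalnum() or text[i] == "_"):
                i += 1
            tokens.append(("IDENTIFIER", text[start:i]))
        else:
            raise ValueError(f"unexpected character {ch!r} at position {i}")
    return tokens

print(lex("foo 124.35 bar_2"))
```

A real generated lexer compiles regular expressions down to exactly this kind of loop, which is why it can outrun a general-purpose regex engine in core parsing code.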


AI and data security: a help or a hindrance?

Having the right technology in place is vital, but companies need the right people to ensure it runs effectively. In many cases, we’re seeing a shift toward companies bringing senior security talent in-house rather than relying on external partners to bolster their security infrastructure. But organisations still have a long way to go when it comes to building security expertise from within. More than half (52%) of respondents in a recent poll by Infosecurity Europe said they have a skills shortage in their organisation when it comes to preventing cyber attacks. Without the right team and technology, cyber attacks will only grow in severity. Neither can work effectively in isolation, and those organisations that don’t invest in both will find out that the impact of a data breach goes far beyond fines. Businesses know that there is a high risk of cyber attacks, and the majority are trying to build the right team and implement technology to tackle cyber security. But very few leaders truly understand where all data leak vulnerabilities exist and how to prevent them.


Creating HTML Layouts That Meet Web Accessibility Standards

Example of HTML Elements and ARIA Landmarks in a Page Layout.
Use ARIA landmarks across web pages where appropriate. ARIA (Accessible Rich Internet Applications) is a comprehensive technical specification for adding accessibility information to elements that are not natively accessible (in particular, the ones developed with JavaScript, AJAX, and DHTML). With ARIA landmarks, a developer can extend HTML capabilities and apply proper semantics, i.e. properties, to UI and content elements so that assistive technologies can understand them. Here is an example of how HTML semantic elements (<header>, <nav>, <main>, <footer>) are combined with ARIA role attributes (“banner”, “navigation”, “main”, “contentinfo”) to make website navigation using a screen reader easier for people with disabilities. Though most ARIA functions were recently implemented in HTML5 (and you should definitely favor these!), not all screen readers and browsers (e.g. IE) are sophisticated enough to depend on HTML semantics alone.


Undertake software dependency management to reduce conflicts 


Since dependencies can take numerous forms, it's easy to end up with too many. When software depends on many packages or components, the application might have significant compatibility problems and can be plagued by long downloads, plus require lots of storage space. Similar problems occur with long dependency chains, where components depend on other components, and so on. Dependencies can conflict when multiple applications rely on different, incompatible versions of the same dependency. For example, if one application depends on component A.1 and another application depends on component A.2, but A.1 and A.2 cannot be installed together, a conflict occurs -- and many conflicts are more convoluted than this example. In such circumstances, both apps cannot run on the same system at the same time, or the application with the older dependency might need an update to use the current dependency. Circular dependencies can also affect software applications or constituent components.
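The A.1/A.2 conflict described above can be made concrete with a small sketch (application and component names are invented for illustration):

```python
# Detect version conflicts: collect each application's pinned dependency
# versions, then flag any component pinned to different versions by
# different apps -- the article's "A.1 vs A.2" situation.
def find_conflicts(apps):
    pinned = {}  # component -> {version: [apps that require it]}
    for app, deps in apps.items():
        for component, version in deps.items():
            pinned.setdefault(component, {}).setdefault(version, []).append(app)
    # A component with more than one pinned version is a conflict.
    return {c: v for c, v in pinned.items() if len(v) > 1}

apps = {
    "app1": {"A": "1"},            # depends on component A.1
    "app2": {"A": "2", "B": "1"},  # depends on component A.2
}
print(find_conflicts(apps))  # component A is pinned to two versions
```

Real package managers perform a far more elaborate version of this bookkeeping, but the core question is the same: does any shared component end up pinned to incompatible versions?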


Advancing OpenCL™ for FPGAs

Intel has created the Intel® FPGA SDK for OpenCL™ technology, which provides an alternative to HDL programming. The technology and related tools belong to a class of techniques called high-level synthesis (HLS) that enable designs to be expressed with higher levels of abstraction. Intel FPGA SDK for OpenCL technology is now in widespread use. Amazingly for longtime FPGA application developers, the performance achieved is often close to, or even better than, HDL code. But it also seems apparent that achieving this performance is often limited to developers who already know how the C-to-hardware translation works, and who have an in-house toolkit of optimization methods. At Boston University, we’ve worked on enumerating, characterizing, and systematizing one such optimization toolkit. There are already a number of best-practices documents for OpenCL on FPGAs. This work augments them, largely by applying additional methods well known to the high-performance computing (HPC) community. In this methodology, we believe we’re on the right track.


We are spending a lot of R&D time and effort figuring out what that looks like in our world. In our human resources products, we call it augmented intelligence. You can look at the data; you can discern certain things that are going on with your workforce, such as diversity. Where you can get into augmented intelligence in a human capital management environment, you can literally train the product to tell you things about the workforce by doing ongoing analytics. With Intacct, we’ve talked a lot about artificial intelligence. When dealing with the close [for bookkeeping], especially for publicly traded companies, what if you could—over time through artificial intelligence—just always have an ongoing close? So it was never a monolithic event? Transactions were always updated. You had triggers that showed you potentially fraudulent transactions. You’re cleaning up your books as you go along. There is no notion of a period-end close. You’re always closing. You could teach an AI engine how to do a continuous financial close. Those are the kinds of things we are trying to bring to bear within our products.


Artificial Intelligence is Really the Future? Let's Explore
The fresh recognition given to the pioneers of artificial intelligence, computer scientists Yoshua Bengio, Geoffrey Hinton and Yann LeCun, with the Turing Award, an honour better known as the technology industry’s version of the Nobel Prize, has established that the world is acknowledging the relevance of emerging tech. AI has become part of the DNA of tech giants like Google. To maintain the sanctity of this technology and address the ethical concerns around the growth of artificial intelligence, the company has created an Advanced Technology External Advisory Council to keep AI in check and shape the "responsible development and use" of AI in its products. Apart from being one of the fastest-growing technologies in science, AI has taken the crown as the front-runner for digital transformation, which has become a major part of every company’s agenda, 40 per cent of which is expected to be met by employing artificial intelligence. Smart assistants are supporting decision-making in diverse fields, from medicine and IT to education.


Critical Magento SQL injection flaw could be targeted by hackers soon

Due to its popularity and the sensitive customer data it processes, the Magento platform is an attractive target for hackers and has been targeted in widespread attacks many times in the past. The number of attacks against online shops in general has increased over the past year, with some groups of hackers specializing in web skimming -- injecting rogue scripts into checkout pages to capture credit card details. SQL injection vulnerabilities allow injecting data into or reading information from databases. Even if this particular flaw can't be used to infect a website directly, it can potentially give attackers access to accounts on a site. That access can then be used to exploit one of the other privilege escalation or code execution flaws that were patched in this release and which require authentication. "Unauthenticated attacks, like the one seen in this particular SQL Injection vulnerability, are very serious because they can be automated — making it easy for hackers to mount successful, widespread attacks against vulnerable websites," the Sucuri researchers warned.
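To make the attack class concrete, here is a generic illustration in Python with sqlite3 (not Magento or PHP code; the table and payload are invented) of why SQL injection works and how parameterized queries stop it:

```python
import sqlite3

# Toy database standing in for a shop's user table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

malicious = "x' OR '1'='1"  # classic injection payload

# Vulnerable: string interpolation lets the payload rewrite the WHERE clause,
# so the query matches every row instead of one user.
unsafe = conn.execute(
    f"SELECT * FROM users WHERE name = '{malicious}'").fetchall()
print(len(unsafe))

# Safe: the driver binds the value as data; the payload is just an odd
# username that matches nothing.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)).fetchall()
print(len(safe))
```

The same principle, applied consistently at every query site, is what separates the automatable unauthenticated attacks the Sucuri researchers describe from a non-event.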


C# Futures: Deferred Error Handling

In order to use deferred error handling, a new compiler directive called “exception mode” is used. This switches the current function between structured exception handling and the new deferred mode. When using the deferred mode, the Exception.LastException property can be used to determine if an error has occurred. This stores only the most recent error, so if multiple errors occurred, all but the last will be lost. This has caused some concern, as it would mean one should check LastException after each line, which would be contrary to the goal of reducing the amount of code needed. To address this, an amendment to the proposal, replacing LastException with a stack, is under consideration. ... The use of both structured and deferred error handling in the same function can be problematic from a compiler standpoint. Deferred mode fundamentally changes the way the code is compiled, much like how C# implements closures and async/await without CLR support.



Quote for the day:


"Real leadership is being the person others will gladly and confidently follow." -- John C. Maxwell


Daily Tech Digest - March 31, 2019

Commonwealth Bank and Westpac cautious on using AI for compliance


"Regtech is at an early stage in its life cycle, but we are trying to get it mature, make it understood and get it linked into the strategic core – that is a big focus for us," Ms Cooper said. At the regtech event, a key frustration from start-ups was the time it took for new, outside technology to be considered by the ultimate decision-makers. "It's extremely hard. I've probably been in Westpac's building 30 times in the last year and a half or so," said SimpleKYC founder Eric Frost, whose system is being used by American Express to reduce customer on-boarding times. Neither he nor Westpac would disclose whether Westpac had started using the system. Ms Cooper said Westpac's senior management was encouraging collaboration with start-ups, but at the same time, compliance was an area where there was little room for failure. "There is a very strong strategic focus, right from the top, on partnering, not sitting in our ivory tower thinking we can build it ourselves, which creates the conditions for us to do things like 'minimum viable procurement'," she said.


Be Unreasonable in Pursuing Your Goals
The point is that you need a lot of quarters. You can’t rely on one of anything. Big dreams start with money because it’s measurable. Your parents told you to be reasonable, to play it safe. Rich people do not say "money is not everything." Don’t be a victim. Don’t assign blame to anyone other than yourself. Quit making excuses. Get your heart in the deal all the time. Have enough so that nothing can stop you. Embrace this thing called sales. Every business I ever started was built on making sales. No sales equal no business. You don’t need therapy, you need to take action. You don’t need to write a business plan or organize your address book. Without money coming in you’re dead in the water. The healing is in the doing, not the thinking. Sales is about doing. It’s the ask, the follow-up. Were you told not to be too persistent when you were a kid? "Money doesn’t grow on trees" is code for "I don’t know how to bring money in." Replace the excuses with the truth. The truth is you’re lost when it comes to income.


Wiring financial organisations for regulatory success

Technology can help tie regulations to internal processes. Structured data sets mean that it’s possible to connect the dots between policies/procedures and processes, systems, controls and products and services through structured content and ML tagging. A clear link to the broader risk-management framework, governance, and processes is necessary at all levels of the hierarchy, across both large and small companies. No longer is this something presented as a futuristic view at conferences and industry events, but a new reality which regtech is bringing to life. With the use of technology, a huge amount of data that offers significant insight into risk can be captured for evidencing and provided to regulators in a detailed structured format that is easy to understand. Needless to say, such a technology-driven holistic structured approach to data is fast becoming the only viable way to successfully manage policies and stay compliant in the current regulatory landscape.


How Insurers Can Tackle Cyber Threats in the Digital Age

Persistent knowledge gaps hinder the creation of effective cybersecurity cultures. Human error accounts for a significant share of cyber breaches, with phishing schemes alone responsible for three-quarters of malware hitting organizations globally, according to NTT Data. And while there’s broad recognition that cyber-attacks pose a major threat to organizations, there’s a stark divide between IT professionals and corporate leadership regarding the effectiveness of organizational protocols. In one survey, 59 percent of corporate board members said that their organizations’ cybersecurity governance practices were very effective, while only 18 percent of IT professionals agreed. Insurers can work with their clients to achieve a unified understanding of cybersecurity policies and terms. When all stakeholders operate according to a standardized cybersecurity framework, organizations can better manage risk, understand their vulnerabilities, respond to emerging threats and contain the fall-out of breaches.


30+ Powerful Artificial Intelligence Examples you Need to Know

Trading algorithms are already used successfully in the world’s markets, a testament to the staggering speed with which computer systems have transformed stock trading. Even though automation rules the trading world, the most complex algorithms use basic AI reasoning. Machine learning is poised to change the tradition by putting emphasis on making decisions more hard-data based and less grounded in trading theories. While humans will always play a role in regulation and in making the final decisions, more and more financial transactions are making their way to computer systems. Plus, given the competitive nature of this field, investment in AI and machine learning will be one of its most defining aspects. Luckily, these technologies have the potential to stabilize, not disrupt, the financial industry, resulting in better job stability (and even a reduced probability of market crashes).


Why a Digital Mindset Is Key to Digital Transformation

While infrastructure and technology are clearly important considerations, digital transformation is as much about the people and changing the way they approach business problems and where they look to find solutions. In fact, according to Gartner research analyst Aashish Gupta, many organizations forget to address the necessary cultural shift needed to change the mindset of workers, without which no digital transformation project is going to succeed. "The culture aspect and the technology demand equal attention from the application leader, because culture will form the backbone of all change initiatives for their digital business transformation. Staff trapped in a 'fixed' mindset may slow down or, worse, derail the digital business transformation initiatives of the company,” he said in a statement. To encourage a change in mindset from traditional to digital, Gartner has developed a four-step plan which it outlines in its report "Digital Business Requires a New Mindset, Not Just New Technology," due to be released soon.


The new third-party oversight framework: Trust but verify

There is a need to identify risk at different points in the third-party life cycle: at the commencement of the relationship, and on a regular basis thereafter, based on a number of factors that influence the risk the third party generates, such as privacy, regulatory compliance, business continuity planning, and information security. However, there also needs to be an early warning system that can alert management to a potential increase of risk outside of these scheduled assessments. This is where the link to risk appetite, key performance, and risk indicators comes into play. The risk oversight functions need to work together to build a set of factors that can assess the inherent risk associated with an activity, plus any increase in risk associated with outsourcing the activity to a third party, the mitigating effect of existing measures employed by the institution and the third party to control that risk, and the remaining, or residual, risk that the institution continues to bear.


What data dominance really means, and how countries can compete

A lot of the current debate approaches data from the supply side, asking about ownership and privacy. These are no doubt important questions. But countries need to think deeply about the demand side: are they growing local industries that will make use of data? If not, they will find themselves forever exporting raw data and importing expensive digital services. People say data is like oil. But it isn’t, really. For one thing, data isn’t “fungible”: you can’t swap one piece of information for something else. Knowing my Amazon purchase history won’t help a self-driving car identify a stop sign. This is true even when data is the exact same type: my browsing history may not be as valuable as yours. This non-fungible nature shows up in my estimates of Facebook’s average monthly revenue-per-user, which show that the average Canadian user generates 100 times more revenue than the average Ethiopian user.


5 Ways Marketers Can Gain an Edge With Machine Learning

In the past -- and occasionally today -- these recommendations were manually curated by a human. For the past 10 years, they have often been driven by simple algorithms that display recommendations based on what other visitors have viewed or purchased. Machine learning can deliver substantial improvements over these simple algorithms. Machine learning can synthesize all the information you have available about a person, such as his past purchases, current web behavior, email interactions, location, industry, demographics, etc., to determine his interests and pick the best products or the most relevant content. Machine learning-driven recommendations learn which items or item attributes, styles, categories, price points, etc., are most relevant to each particular person based on his engagement with the recommendations -- so the algorithms keep improving over time. And machine learning-driven recommendations are not limited to products and content. You can recommend anything -- categories, brands, topics, authors, reviews vs. tech specs etc.
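The simple "visitors who viewed this also viewed" baseline that the article contrasts with full machine learning can be sketched in a few lines (the session data below is invented):

```python
from collections import Counter

# Each session is the list of items one visitor viewed.
sessions = [
    ["laptop", "mouse", "keyboard"],
    ["laptop", "mouse"],
    ["laptop", "monitor"],
    ["phone", "case"],
]

def also_viewed(item, sessions, k=2):
    """Recommend the k items most often co-viewed with `item`."""
    co = Counter()
    for s in sessions:
        if item in s:
            co.update(x for x in s if x != item)
    return [x for x, _ in co.most_common(k)]

print(also_viewed("laptop", sessions))
```

A machine learning-driven recommender replaces this single co-occurrence signal with the full profile the article lists -- past purchases, web behavior, email interactions, location, demographics -- and keeps adjusting its ranking as engagement feedback arrives.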



Why DevOps Fails: Some Key Reasons To Consider

It is important to understand why culture matters: culture is the set of practices, standards, beliefs, and structures that reinforce how an organization works. DevOps is not only a set of tools; you must create a culture of DevOps in your organization to get the results you seek. A U.S. government agency that adopted DevOps for continuous deployment failed to identify the importance of people and process, which led to misconduct and confusion among developers and key people. ... every organization is a technology-driven organization, regardless of domain. The journey from digital transformation to a continuous digital journey demands flexibility, agility, and quality above all. DevOps has become a need for organizations that deliver software or frequently release updates or new features, in order to serve their customers with quality and superiority. There is no doubt that DevOps can make software development faster, but every organization has a different set of requirements, and each company's DevOps adoption must be tailored to that set of requirements.



Quote for the day:


"Leadership is the other side of the coin of loneliness, and he who is a leader must always act alone. And acting alone, accept everything alone." -- Ferdinand Marcos


Daily Tech Digest - March 30, 2019

As memory prices plummet, PCIe is poised to overtake SATA for SSDs

PCIe is several times faster and has much more parallelism, so throughput is more suited to the NAND format. It comes in two physical formats: an add-in card that plugs into a PCIe slot, and M.2, which is about the size of a stick of gum and sits on the motherboard. PCIe is most widely used in servers, while M.2 is in consumer devices. There used to be a significant price difference between PCIe and SATA drives with the same capacity, but they have come into parity thanks to Moore’s Law, said Jim Handy, principal analyst with Objective Analysis, who follows the memory market. “The controller used to be a big part of the price of an SSD. But complexity has not grown with transistor count. It can have a lot of transistors, and it doesn’t cost more. SATA got more complicated, but PCIe has not. PCIe is very close to the same price as SATA, and [the controller] was the only thing that justified the price difference between the two,” he said. DigiTimes estimates that the price drop for NAND flash chips will cause global shipments of SSDs to surge 20 to 25 percent in 2019.


Edge computing is real. It's here, and companies have to have a strategy to handle the enormous influx of data coming in real time from devices globally. Analysts project there will be 50 billion telematics devices by 2020 and forecast the sum of the world's data will reach 175 zettabytes by 2025. Although edge computing is putting enormous pressure on IT infrastructure -- where legacy systems at the networking, storage, and application layers are straining today -- a new generation of systems is coming to market to help companies deal with the data explosion caused by edge computing. What is most exciting is the ability these new systems give companies to engage with customers in fundamentally new ways. There are examples of new business models being developed around the edge -- Netflix, Uber, and Amazon are notable examples -- but now many companies can adopt these new business models with next-generation, edge-aware systems emerging today.


The second-biggest improvement that Microsoft has made in HoloLens 2 is that the gesture control has been revamped. If I am to be completely honest, I have never had the best luck with getting HoloLens gestures to work. I always assumed that I was doing something wrong, because nobody else that I have talked to seems to have any trouble. From what I have heard about HoloLens 2, a new artificial intelligence (AI) processor and something called a time-of-flight depth sensor will collectively make it so that HoloLens will allow you to interact with holographic objects in the same way that you would interact with their real-world counterparts. This might mean being able to pick up a hologram and move it as if it were a physical object, as opposed to having to resort to using the convoluted gestures that are currently required. It remains to be seen how this new capability will actually be implemented, but I have high hopes that using HoloLens 2 will be far more intuitive than using its predecessor.


How to eliminate the security risk of redundant data
Most enterprises migrate their data to the public cloud in that second way: they just cart it all from the data center to the cloud. Often, there is no single source of truth in the on-premises databases, so all the data moved to the public cloud keeps its redundancies. Although it’s an architectural no-no, the reality is that most systems are built in silos, which is where the redundancies come from. They often create their own versions of common enterprise data, such as customer data, order data, and invoice data. As a result, most enterprises have several security vulnerabilities that they have inadvertently moved to the cloud. ... The best solution to this problem is to not maintain redundant data. I’m sure the CRM system has APIs to allow for secure access to customer data that can be integrated directly into the inventory system. Or, the other way around. The goal is to maintain data in a single physical location, even if it is accessed by multiple systems. Even if you do eliminate most of the redundant data, all your data should be secured under a holistic security system that’s consistent from application to application and from database to database.


Vulnerability management woes continue, but there is hope
Let data analytics be your guide. In other words, take all your vulnerability scanning data and analyze it across a multitude of parameters, including asset value, known exploits, exploitability, threat actors, CVSS score, similar vulnerability history, etc. This data analysis can be used to calculate risk scores, and these risk scores can help guide organizations on which vulnerabilities should be patched immediately, which ones require compensating controls until they can be patched, which ones can be patched on a scheduled basis, and which ones can be ignored. Of course, few organizations will have the resources or data science skills to put together the right vulnerability management algorithms on their own, but vendors such as Kenna Security, RiskSense, and Tenable Networks are all over this space. Furthermore, SOAR vendors such as Demisto, Phantom, Resilient, ServiceNow, and Swimlane are working with customers on runbooks to better manage the operational processes.
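The risk-scoring idea can be sketched as follows. The weights and fields here are invented for illustration and are not any vendor's actual algorithm:

```python
# Toy risk score: combine CVSS, exploit availability, and asset value
# into one number used to rank patching work.
def risk_score(vuln):
    score = vuln["cvss"] / 10.0      # normalize CVSS to 0..1
    if vuln["known_exploit"]:
        score *= 1.5                 # weaponized bugs jump the queue
    score *= vuln["asset_value"]     # 1 = low-value asset, 3 = crown jewels
    return round(score, 2)

vulns = [
    {"id": "CVE-A", "cvss": 9.8, "known_exploit": True,  "asset_value": 3},
    {"id": "CVE-B", "cvss": 9.8, "known_exploit": False, "asset_value": 1},
    {"id": "CVE-C", "cvss": 5.0, "known_exploit": True,  "asset_value": 2},
]
for v in sorted(vulns, key=risk_score, reverse=True):
    print(v["id"], risk_score(v))
```

Note how CVE-C, a mid-severity bug with a known exploit on a valuable asset, outranks CVE-B, a critical-CVSS bug with neither: that reordering, done across thousands of findings with far richer inputs, is the point of the analytics-driven approach.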


7 tips for stress testing a disaster recovery plan

A disaster recovery plan is a bit like an insurance policy: we all agree we need it and we all hope we’ll never use it. And as with insurance, nobody wants to discover their DR plan doesn’t actually protect them when a disaster hits. Similarly, nobody wants to find out that their DR plan is overdone – meaning they’ve been spending too much time, money and energy maintaining it. But if you don’t regularly stress test your DR plan, you could find yourself in one of these situations. I’ve worked with a lot of businesses, and I’ve noticed that few conduct regular stress tests of their DR plans. That’s a problem: no disaster recovery plan is good enough to magically transform as a business changes – and realistically, no business remains static. At a previous firm, we tested quarterly and found changes and updates during every test! So how can you verify that your DR plan fits your current needs? Follow these seven steps.


Cisco rates both those router vulnerabilities as “High” and describes the problems like this:  One vulnerability is due to improper validation of user-supplied input. An attacker could exploit this vulnerability by sending malicious HTTP POST requests to the web-based management interface of an affected device. A successful exploit could allow the attacker to execute arbitrary commands on the underlying Linux shell as root; and the second exposure is due to improper access controls for URLs. An attacker could exploit this vulnerability by connecting to an affected device via HTTP or HTTPS and requesting specific URLs. A successful exploit could allow the attacker to download the router configuration or detailed diagnostic information. Cisco said firmware updates that address these vulnerabilities are not available and no workarounds exist, but is working on a complete fix for both. On the IOS front, the company said six of the vulnerabilities affect both Cisco IOS Software and Cisco IOS XE Software, one of the vulnerabilities affects just Cisco IOS software and ten of the vulnerabilities affect just Cisco IOS XE software.


VS Code Python Type Checker Is Microsoft 'Side Project'

Deemed a work in progress with no official support from Microsoft and much functionality yet to be implemented, the GitHub-based project is described as an attempt to improve on currently available Python type checkers, with mypy mentioned specifically. Of course, the increasingly popular Visual Studio Code editor already sports a widely used, Microsoft-backed, jack-of-all-trades Python extension (just updated) that boasts more than 35 million downloads and 7.3 million installations and does type checking and a whole lot more. But Pyright isn't aiming to compete with that tool; rather, it aims to improve on its type-checking capabilities, which are powered by the Microsoft Python Language Server. That server uses the language server protocol to provide IntelliSense and other advanced functionality for different programming languages in code editors and IDEs. "Pyright provides overlapping functionality but includes some unique features such as more configurability, command-line execution, and better performance," the GitHub project says.
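To make the type-checking idea concrete, here is the kind of annotated code a static checker such as Pyright analyzes. The function below is a made-up example; a checker verifies every call site against the annotations before the program ever runs, so the commented-out mismatched call would be reported as an error at check time rather than surfacing as a runtime bug:

```python
def total_price(quantity: int, unit_price: float) -> float:
    """Annotations let a static checker verify call sites."""
    return quantity * unit_price

ok = total_price(3, 2.5)       # consistent: (int, float) -> float
# bad = total_price("3", 2.5)  # a type checker flags this call:
#                              # str is not assignable to 'quantity'
```

The same file can be checked from the command line (one of Pyright's advertised features) or inside the editor via the language server.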


Tapping security cameras for better algorithm training

For computer vision and facial recognition systems to work reliably, they need training datasets that approximate real-world conditions. So far, researchers have had access to only a small number of image datasets, many of which are heavily populated with still pictures of fair-skinned men. This limitation impacts the accuracy of the technology when it comes across types of images it's not familiar with – those of women or people of color, for instance. Another challenge is related to the varying quality of the images on video feeds available from surveillance cameras. Often the cameras' scope and angle, as well as the lighting or weather during a given recording, make it difficult for law enforcement to track or re-identify people from security camera footage as they try to reconstruct crimes, protect critical infrastructure and secure special events. To help solve this problem, the Intelligence Advanced Research Projects Activity has issued a request for information regarding video data that will help improve computer vision research in multicamera networks.
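The dataset-bias point can be made concrete with a small evaluation sketch: an aggregate accuracy number can look healthy while masking poor performance on an underrepresented group, which is why breakdowns per subgroup matter. This is a generic illustration with fabricated labels, not IARPA or any real benchmark data:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) tuples.
    Returns (overall_accuracy, {group: accuracy})."""
    totals = defaultdict(lambda: [0, 0])  # group -> [correct, count]
    for group, pred, actual in records:
        totals[group][0] += int(pred == actual)
        totals[group][1] += 1
    per_group = {g: c / n for g, (c, n) in totals.items()}
    overall = (sum(c for c, _ in totals.values())
               / sum(n for _, n in totals.values()))
    return overall, per_group

# Group A is well represented (95 samples), group B is not (5 samples).
data = ([("A", 1, 1)] * 90 + [("A", 0, 1)] * 5
        + [("B", 0, 1)] * 4 + [("B", 1, 1)])
overall, per_group = accuracy_by_group(data)
# Overall accuracy is 0.91, yet group B's accuracy is only 0.20.
```

A 91% headline number here hides a model that fails four times out of five on the minority group, which is exactly the failure mode broader training data is meant to address.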


Huawei Security Shortcomings Cited by British Intelligence

The latest findings are contained in the fifth annual report to be issued by the NCSC's Huawei Cyber Security Evaluation Center, which the U.K. government launched in 2010 to review Huawei's business strategies and test all product ranges before they were potentially used in any setting that might have national security repercussions. The new report emphasizes that the findings should not imply that U.K. telecommunications networks are at any greater risk now than they were before. Rather, the findings are part of a high-level review to ensure that Britain's telecommunications networks remain as secure as possible. "We can and have been managing the security risk and have set out the improvements we expect the company to make. We will not compromise on the progress we need to see: sustained evidence of better software engineering and cybersecurity, verified by HCSEC," the NCSC spokeswoman says. "This report illustrates above all the need for improved cybersecurity in the U.K. telco networks, which is being addressed more widely by the digital secretary's review."



Quote for the day:



"Prosperity isn't found by avoiding problems, it's found by solving them." -- Tim Fargo