Daily Tech Digest - April 21, 2019

Blockchain: The Ultimate Disruptor?


What the internet did for the exchange of information, blockchain has the potential to do for the exchange of a digital asset’s value. Right now, many people in the blockchain space are talking about “tokenization,” which breaks down the ownership of an asset into digital tokens to allow wider-scale ownership of that asset. Tokenization started with initial coin offerings (ICOs) and has evolved to securitized token offerings (STOs), which have the potential to unlock the value of trillions of dollars of assets that are currently closed to the average person and make them more accessible. We’re talking about real-estate holdings, private equity, etc. When these assets are tokenized and brought into the market, it could affect how the average person does their financial and retirement planning, as well as where and what they choose to invest in. ... Commenting on the potential for wider access, Rafia says, “Currently, private equity, venture capital and other similar investments are not available to retail investors because there are a lot of regulations preventing it, as they tend to be riskier asset classes.”


Key to changing your enterprise is analyzing the impact of changes and planning them in a smart way. We do not advocate a ‘big up-front design’ approach with huge, rigid multi-year transformation plans. Rather, in an increasingly volatile business world you need an iterative approach in which your plans are updated regularly to match changing circumstances, typically in an agile manner. The figure below shows a simple example of dependencies between a series of changes, depicted with the pink boxes. A delay in ‘P428’ causes problems in the schedule, since ‘P472’ depends on it. Moreover, since the two changes overlap in scope (shown in the right-hand table), they could potentially be in each other’s way when they also overlap in time. This information is calculated from the combination of project schedule and architecture information, a clear example of the value of integrating this kind of structure and data in a Digital Twin.


How People Are the Real Key to Digital Transformation

An interview with Gerald C. Kane, Anh Nguyen Phillips, Jonathan R. Copulsky, and Garth R. Andrus, the authors of "The Technology Fallacy."
Digital disruption affects all levels of the organization. Our research shows, however, that higher-level leaders are generally much more optimistic about how their organization is adapting to that disruption than lower-level employees. This result suggests that leaders may be overestimating how well their organization is responding. In the book, we provide a framework by which leaders can survey their employees to gauge how digitally mature their organization is against 23 traits, which we refer to as the organization’s digital DNA. Digital maturity is usually unevenly distributed throughout an organization, and we encourage organizations to use this framework to assess how maturity is distributed so they can begin to identify and address the areas of improvement that are most likely to yield organizational benefits. ... a single set of organizational characteristics was essential for digital maturity -- accepting risk of failure as a natural part of experimenting with new initiatives, actively implementing initiatives to increase agility, valuing and encouraging experimentation and testing as a means of continuous organizational learning, recognizing and rewarding collaboration across teams and divisions, increasingly organizing around cross-functional project teams, and empowering those teams to act autonomously.


Cachalot DB as a Distributed Cache with Unique Features

The most frequent use case for a distributed cache is to store objects identified by one or more unique keys. A database contains the persistent data and, when an object is accessed, we first try to get it from the cache and, if it is not available, load it from the database. Usually, if the object is loaded from the database, it is also stored in the cache for later use. ... By using this simple algorithm, the cache is progressively filled with data and its “hit ratio” improves over time. This cache usage is usually associated with an “eviction policy” to avoid excessive memory consumption. When a threshold is reached (either in terms of memory usage or object count), some of the objects in the cache are removed. The most frequently used eviction policy is “Least Recently Used,” abbreviated LRU. In this case, every time an object is accessed in the cache, its associated timestamp is updated. When eviction is triggered, we remove the objects with the oldest timestamps. Using Cachalot as a distributed cache of this type is very easy.
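The cache-aside pattern with LRU eviction described above can be sketched in a few lines. This is an illustrative stand-in, not Cachalot's actual API; the `database` dict and key names are invented for the example.

```python
from collections import OrderedDict

class LRUCache:
    """Bounded cache that evicts the least recently used entry."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()

    def get(self, key):
        if key not in self._items:
            return None
        # Touch the entry: move it to the "most recently used" end.
        self._items.move_to_end(key)
        return self._items[key]

    def put(self, key, value):
        self._items[key] = value
        self._items.move_to_end(key)
        # Eviction is triggered once the object-count threshold is hit.
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)

database = {"user:1": "Alice", "user:2": "Bob", "user:3": "Carol"}
cache = LRUCache(capacity=2)

def load(key):
    value = cache.get(key)          # 1. try the cache first
    if value is None:
        value = database.get(key)   # 2. fall back to the persistent store
        if value is not None:
            cache.put(key, value)   # 3. store it in the cache for later use
    return value
```

With capacity 2, loading three keys in a row evicts the first one; the next access for it falls through to the database again, which is exactly how the hit ratio "warms up" over time.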


Enterprise Architecture: A Blueprint for Digital Transformation


Enterprise architects have a tough job. They have to think strategically but act tactically. A successful enterprise architect can sit down at the boardroom table and discuss where the business needs to go, then translate “business speak” into technical capabilities on the back end. The key to EA is to always focus on business needs first, then on how those needs can be met by applying technology. It comes down to the concept of IQ (intelligence quotient, or “raw intelligence”) and EQ (emotional quotient, or “emotional intelligence”). As a recent Forbes article stated, “when it comes to success in business, EQ eats IQ for breakfast.” Good enterprise architects need both a good IQ and a good EQ. This balance keeps them from pursuing the latest technology just because it’s cool, focusing them instead on the best way to meet the business need. At the end of the day, an EA should be measured by the business outcomes it’s delivering. Our approach to EA (see below) starts with a business outcome statement, and ends with governance processes to verify we’re achieving those business outcomes and adhering to the EA blueprint.


Crisis Resilience for the Board


Similar to culture oversight, boards are increasingly monitoring company technology activities, from cyber risk to disruption risk to digital transformation. Directors are asking management tough questions about technologies that are vital to the business and whether they are truly protected from the most likely and impactful risks. Beyond protecting data, the board should understand whether management is incorporating resilience into their information technology and cybersecurity strategies. To do so, directors may seek to understand how the most critical data—or that which is most vital to the business’s success—is backed up and protected, both physically and logically. Directors should understand, at a high level, what the most critical data assets or capabilities are for the company and the risks posed to them. Additionally, directors should ask management whether it is considering innovative technologies to both protect assets and enable quick recovery in the event of potential loss. ... Directors might also endeavor to learn about leading practices around risk management, crisis management, cyber risk, physical security, succession planning, and culture risk. This could provide a level of comfort with the risks posed to the company, as well as a degree of confidence in the company’s ability to respond.


The Cybersecurity 202: This is the biggest problem with cybersecurity research


“There are a whole lot of possible barriers that will come to the fore if an organization asks their lawyers about it,” Moore said. “It turns out that many of those risks, on deeper inspection, can be mitigated and overcome. But there has to be institutional will to do it.” One irony of this problem is that the cybersecurity community has been hyper-focused on information sharing in recent years — but the focus has been on companies sharing hacking threats from the past day or two so they can guard against them. The government has championed these threat-sharing operations and facilitates them through a set of organizations called information sharing and analysis centers and information sharing and analysis organizations. That sort of sharing has a clear benefit for companies because it helps them defend against threats that may be coming in the next hour or day. But companies have made less progress on sharing longer-range cybersecurity information that can help address more fundamental cybersecurity challenges, Moore said.


The Connection Between Strategy And Enterprise Architecture (Part 3)

Business capabilities connect the business model with the enterprise architecture, which is composed of the organizational structure, processes, and resources that execute the business model. A business capability is a combination of resources, processes, values, technology solutions, and other assets that are used to implement the strategy. ... business capabilities comprise a fundamental building block that enables and supports the business transformation initiatives companies are undertaking to remain relevant in the constantly changing marketplace. Companies that excel in mapping their existing capabilities and creating a road map to close the gap to their future capabilities are most likely to remain ahead of the competition by responding effectively to industry and market dynamics. Therefore, the way we connect the company’s high-level strategic priorities and objectives to the resources, processes, and ultimately the system landscape that execute the strategy is by mapping and modeling the necessary capabilities.


Leading Innovation = Managing Uncertainty

McKinsey Quarterly (2019) - Three Horizons Framework
Uncertainty is the central characteristic of innovation. While generating new ideas and inventing new technologies is important, it is even more important for innovators to identify the unknowns that have to be true for their ideas and technologies to succeed in the market. We can only claim to have succeeded at innovation when we find the right business model to profitably take our idea or technology to market. At the strategy level, several frameworks have been developed to help leaders understand their product and service portfolios and make decisions. These frameworks use different dimensions that hide, in plain sight, the real challenge leaders face: managing uncertainty. ... The McKinsey framework is perhaps the most popular of them all. This framework maps two dimensions, value and time, to create three horizons. The nearest horizon is Horizon 1, where we extend the core and generate value for the company straight away. In Horizon 2, we build businesses around new opportunities with the potential to impact revenues in the near term. The farthest horizon is Horizon 3, where visionaries work on viable options that will only deliver value to the company after several years.


Cloud Security Architectures: Lifting the Fog from the Cloud

User behavior analytics (UBA) security solutions, oriented primarily at the insider threat, have matured and are commonplace. ... This data is crucial for forensic analyses when a major breach has been detected. There isn’t yet a comparably mature technology for the cloud, where users have migrated their work. The success of UBA technology teaches security professionals how to properly architect new enterprise systems with users’ cloud behaviors in full view. The core approach for the cloud is to gather data from cloud storage logs, extract features, and carefully architect sets of indicators that detect likely breaches. Cloud logs can reveal, for example, when file extensions are changed, what documents are downloaded and to where, whether a document has been downloaded by an anonymous user, and when an unusually high number of documents are downloaded to an odd geolocation. These are all early indicators of potential breach activity.
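The indicator-set approach can be sketched as a simple log scan. This is a toy illustration only: the record fields (`event`, `geo`, `user`, `doc`, `old_ext`, `new_ext`) and the threshold are invented, not any real cloud provider's log schema.

```python
from collections import Counter

def breach_indicators(log_records, max_downloads_per_geo=100):
    """Scan cloud-storage log records and collect early breach indicators."""
    indicators = []
    downloads_by_geo = Counter()
    for rec in log_records:
        if rec["event"] == "download":
            downloads_by_geo[rec["geo"]] += 1
            # Downloads by anonymous users are an early warning sign.
            if rec["user"] == "anonymous":
                indicators.append(("anonymous_download", rec["doc"]))
        elif rec["event"] == "rename" and rec["old_ext"] != rec["new_ext"]:
            # Changed file extensions can indicate tampering or encryption.
            indicators.append(("extension_changed", rec["doc"]))
    # Unusually high download volume to a single geolocation.
    for geo, count in downloads_by_geo.items():
        if count > max_downloads_per_geo:
            indicators.append(("bulk_download_from_geo", geo))
    return indicators
```

A real system would compare against learned per-user baselines rather than fixed thresholds, but the shape of the feature extraction is the same.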



Quote for the day:


"Leaders need to be optimists. Their vision is beyond the present." -- Rudy Giuliani


Daily Tech Digest - April 20, 2019

How to reconstruct your business’s value chain for the digital world

What’s the big advantage of digital? It allows you to disconnect yourself from physical constraints. With Uber, you no longer have to be in the street to hail a cab. You can order a cab from anywhere. If you digitize the supply-chain process, you are no longer linking the production of the product to one physical location. In the analog world, a person would check the inventory and write an order for supplies. When there was a spike in demand, that person would call more people and write more orders for more supplies. But in the digital world, you can create a manufacturing process where your inventory, recipes, and prices are all available in a digitized, harmonized ecosystem. When demand spikes, you can turn the dial on your robotic process automation (RPA) tool. When we digitize and harmonize complex business processes, we no longer have to call a guy who orders a part. Instead, you have a view into the inventory across multiple suppliers. The CIO has a unique and critical role in digital transformation, as long as they don’t fall into a few common traps. One such trap is when the CEO throws money at you and tells you, “Bring me this shiny new technology.”


Why Enterprise-Grade Cybersecurity Needs a Federated Architecture

A federated architecture combines the strengths of centralized and distributed architectures and is, therefore, a kind of “best of both worlds” approach. With federated, a controller is placed in each data center or public cloud region (just like distributed), but those multiple controllers act in concert to provide the abstraction of a single centralized controller. All of the controllers in a federated architecture communicate with each other to share information about the organization’s security policy as well as the workloads that are being secured. This type of architecture is the best when it comes to securing global infrastructure at scale. And, as is typically the case when writing enterprise-grade software, making the right architectural choice and then implementing it in an elegant way required our architects and engineers to spend a little more time and be a little more thoughtful. Our ultimate goal was to deliver an enterprise-scale architecture that delivered the benefits of a federated architecture without the downsides of distributed and centralized.
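The federated idea can be illustrated with a toy model: one controller per region, each authoritative for its local workloads, all sharing state so that any of them can present the "one centralized controller" abstraction. Class and field names here are invented for illustration, not the vendor's actual design.

```python
class Controller:
    """One per data center / cloud region, as in the distributed model."""
    def __init__(self, region):
        self.region = region
        self.local_workloads = {}   # workloads this controller secures
        self.peers = []             # other controllers in the federation

    def register_workload(self, name, policy):
        self.local_workloads[name] = policy

    def global_view(self):
        """The centralized abstraction: merge every peer's state with ours,
        so a query against any one controller sees the whole organization."""
        view = dict(self.local_workloads)
        for peer in self.peers:
            view.update(peer.local_workloads)
        return view

def federate(controllers):
    """Wire every controller to every other one."""
    for c in controllers:
        c.peers = [p for p in controllers if p is not c]

us, eu = Controller("us-east"), Controller("eu-west")
federate([us, eu])
us.register_workload("billing-db", "deny-all")
eu.register_workload("web-frontend", "allow-https")
```

The point of the sketch is that `us.global_view()` and `eu.global_view()` agree, even though each controller only ever registered its own local workloads.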


Ready for 6G? How AI will shape the network of the future


Take the problem of coordinating self-driving vehicles through a major city. That’s a significant challenge, given that some 2.7 million vehicles enter a city like New York every day. The self-driving vehicles of the future will need to be aware of their location, their environment and how it is changing, and other road users such as cyclists, pedestrians, and other self-driving vehicles. They will need to negotiate passage through junctions and optimize their route in a way that minimizes journey times. That’s a significant computational challenge. It will require cars to rapidly create on-the-fly networks, for example, as they approach a specific junction—and then abandon them almost instantly. At the same time, they will be part of broader networks calculating routes and journey times and so on. “Interactions will therefore be necessary in vast amounts, to solve large distributed problems where massive connectivity, large data volumes and ultra low-latency beyond those to be offered by 5G networks will be essential,” say Stoica and Abreu.


IT Governance 101: IT Governance for Dummies, Part 2

One of the powerful aspects of COBIT is that it acts as the glue between governance and management, describing both governance and management processes. Its concept of cascading enterprise goals to IT goals to enabler goals and metrics ensures consistent communication and alignment. These enablers, such as Processes, are where all the IT management frameworks can be plugged in, helping to give the frameworks a business context and ensuring that they focus on delivering value and outcomes, not just outputs. As stated by one expert in the UAE, “I think often because organizations do not do a goals cascade things feel disconnected and orphaned, but once you do a proper goals cascade you can see and feel the interconnection and how goals are interdependent on each other to achieve the enterprise-level goals.” ... Clearly, these exploding business demands for new benefits exist and, at the same time, IT is expected to make everything secure, replace all that legacy stuff that is slowing down the Ubering, and stop IT from breaking as well.


Some internet outages predicted for the coming month as '768k Day' approaches

The good news is that network admins have known about 768k Day for a long time, and many have already prepared, either by replacing old routers with new gear or by making firmware tweaks to allow devices to handle global BGP routing tables that exceed even 768,000 routes. "Yes, TCAM memory settings can be adjusted to help mitigate, and even go beyond 768k routes on some platforms, which will work if you don't run IPv6. These setting changes require a reboot to take effect," Troutman said. "The 768k IPv4 route limit is only a problem if you are taking ALL routes. If you discard or don't accept /24 routes, that eliminates half the total BGP table size." "The organizations that are running older equipment should know this already, and have the configurations in place to limit installed prefixes. It is not difficult," Troutman added. "I have a telco ILEC client that is still running their network quite nicely on old Cisco 6509 SUP-720 gear, and I am familiar with others, too," he said.
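The mitigation Troutman describes boils down to a prefix-length filter: refuse the most specific announcements and far fewer routes ever need to fit into TCAM. A minimal sketch of that filtering logic, with a made-up sample of announcements:

```python
import ipaddress

def accept_route(prefix, max_prefix_len=23):
    """Install only routes no more specific than /23, i.e. discard /24s."""
    return ipaddress.ip_network(prefix).prefixlen <= max_prefix_len

# Hypothetical announcements received from BGP peers (documentation ranges).
announcements = [
    "10.0.0.0/8",
    "192.0.2.0/24",      # discarded: too specific
    "198.51.100.0/23",
    "203.0.113.0/24",    # discarded: too specific
]

installed = [p for p in announcements if accept_route(p)]
```

On real routers this is done with prefix-list policy rather than application code, but the effect on table size is the same: in the global table, /24s are roughly half of all IPv4 routes.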


Bots Are Coming! Approaches for Testing Conversational Interfaces

When testing such interfaces, natural language is the input, and we humans really love having alternatives, and love our synonyms and our expressions. Testing in this context moves from pure logic to something closer to fuzzy logic and clouds of probabilities. As they are intended to provide a natural interaction, testing conversational interfaces also requires a great deal of empathy and understanding of human society and ways of interacting. In this area, I would include cultural aspects, including paraverbal aspects of speech (that is, all communication happening beside the spoken message, encoded in voice modulation and level). These elements provide an additional level of complexity, and many times the person doing the testing work needs to consider such aspects. I believe it’s fair to say that testing a conversational interface can also be seen as tuning, so that it passes a Turing test. Another challenge faced when testing such interfaces is the distributed architecture of the systems.
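One practical consequence of the "clouds of probabilities" point is that bot tests often assert a minimum accuracy over many paraphrases rather than an exact pass/fail on one utterance. A minimal sketch, where `classify_intent` is a deliberately naive stand-in for the real bot under test:

```python
def classify_intent(utterance):
    """Toy keyword-based intent classifier standing in for the bot."""
    text = utterance.lower()
    if any(w in text for w in ("cab", "taxi", "ride")):
        return "book_ride"
    if any(w in text for w in ("weather", "rain", "forecast")):
        return "get_weather"
    return "unknown"

def paraphrase_accuracy(paraphrases, expected_intent):
    """Fraction of alternative phrasings mapped to the expected intent."""
    hits = sum(1 for p in paraphrases if classify_intent(p) == expected_intent)
    return hits / len(paraphrases)

# Humans love synonyms: same request, four phrasings.
ride_requests = [
    "Get me a cab",
    "I need a taxi to the airport",
    "Can you book a ride for me?",
    "Order me a car",   # synonym this toy classifier misses
]
accuracy = paraphrase_accuracy(ride_requests, "book_ride")
```

A test suite would then assert something like `accuracy >= 0.9` and treat misses as training data, rather than treating any single failed phrasing as a hard bug.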


Protecting smart cities and smart people

For as long as most can remember, information security was a technology concern, handled by technologists and discussed by security engineers and associated professionals. The security vendors presented at security conferences, and the security professionals attended accordingly. Cat people with cat people. You know how it goes. Within a smart city ecosystem, we need to extend the cyber conversation beyond the traditional players. How do we make the city planner appreciate what we understand? How do we share and apply security best practices to an engineering company providing a Building Information Modelling (BIM) service to a hospital or defence project? Moreover, how do we, in the first instance, highlight the security concerns? Attending and speaking at numerous cyber conferences, I sometimes wonder: is this the right audience? In this digital ecosystem, we should be speaking to civic and government leaders about the security concerns facing smart cities and critical infrastructure, not exclusively to other security professionals, who are already well aware of the challenges and the resistance experienced.


Don't underestimate the power of the fintech revolution

According to Bank of England Governor Mark Carney, FinTech’s potential is to unbundle banking into its core functions - such as settling payments and allocating capital. For central bankers and regulators who are monitoring the sector, the growth of fintech is akin to any other disruptive technology - that is, will it lead to financial instability? Most fintech start-ups are not regulated as much as traditional financial institutions. So far, it’s the more open financial markets that have seen fintech develop rapidly. One example is the e-payment system M-Pesa, which operates in Kenya, Tanzania and elsewhere, and is one of the biggest fintech success stories since its emergence just a decade ago. By effectively transforming mobile phones into payment accounts, M-Pesa has increased financial access for previously unbanked people. The permissive stance of the Kenyan central bank allowed the sector to develop rapidly in one of East Africa’s most developed economies.


Data Breaches in Healthcare Affect More Than Patient Data

Cybercriminals go after any data they perceive to be valuable, says Rebecca Herold, president of Simbus, a privacy and cloud security services firm, and CEO of The Privacy Professor consultancy. "Payroll data contains a wide range of really valuable data that cybercrooks can sell to other crooks for high amounts," she says. "With the growing number of pathways into healthcare systems and networks ... that are being established through employee-owned devices, through third parties/BAs, and through IoT devices, I believe that such fraud is increasing because of the many more opportunities that crooks have now to commit these types of crimes." The recent attacks on Blue Cross of Idaho and Palmetto Health spotlight the importance for healthcare entities to diligently safeguard all data, says former healthcare CISO Mark Johnson of the consultancy LBMC Information Security. The attacks "underscore for me that the healthcare industry needs to protect the entire environment, not just their large systems like the EMR," he says.


Why Your DevOps Is Not Effective: Common Conflicts in the Team

In the DNA of DevOps culture lies the principle of constant and continuous interaction and collaboration between different people and departments. The key reason for this is a much greater final efficiency and a much shorter time-to-market compared to the traditional approach. Proper implementation of DevOps shifts the focus from personal effectiveness to team efficiency. At the same time, due to automation and the widespread introduction of monitoring and testing, it is possible to spot a problem at the early stages, as well as to quickly find its causes. Building the right culture in the organization is important, and it does not depend on DevOps directly: problems occur in all companies, but in an organization with the right culture, all forces will be directed at solving the problem and preventing it in the future, rather than at finding and punishing a guilty party.



Quote for the day:


"Leaders are more powerful role models when they learn than when they teach." -- Rosabeth Moss Kanter


Daily Tech Digest - April 18, 2019

Automation is a machine, and a machine only does what it is told to do. Complicated tests require a lot of preparation and planning and also have certain boundaries: the script follows the protocol and tests the application accordingly. Ad-hoc testing helps testers answer questions like, “What happens when I follow X instead of Y?” It pushes the tester to think and test with an outside-the-box approach, which is difficult to program into an automation script. Even visual cross-browser testing needs a manual approach. Instead of depending on an automated script to find the visual differences, you can check for the issues manually, either by testing on real browsers and devices or, even better, by using cloud-based cross-browser testing tools, which allow you to test your website seamlessly across thousands of different browser-device-operating system combinations. ... Having a manual touch throughout the testing procedure, instead of depending entirely on automation, will ensure that there are no false positives or false negatives in the test results after a script is executed.


Understanding the key role of ethics in artificial intelligence

It has become faddish to talk about the importance of ethical AI and the need for oversight, transparency, guidelines, diversity, etc., at an abstract and high level. This is not a bad thing, but it often assumes that such ‘talk’ goes hand in hand with addressing the challenges of ethical AI. The facts, however, are much more complex. For example, guidelines themselves are often ineffective (a recent study showed the ACM’s code of ethics had little effect on the decision-making process of engineers). Moreover, even if we agree on how an AI system should behave (not trivial), implementing specific behavior in the context of the complex machinery that underpins AI is extremely challenging. ... Ethics in AI is extremely important given the proliferation of AI systems in consequential areas of our lives: college admissions, financial decision-making systems, and the news we consume on Facebook and other media sites.


Researchers: Malware Can Be Hidden in Medical Images
The "flaw" discovered in the DICOM file format specification could allow attackers to embed executable code within DICOM files to create a hybrid file that is both a fully functioning Windows executable as well as a specification-compliant DICOM image that can be opened and viewed with any DICOM viewer, the report says. "Such files can function as a typical Windows PE file while maintaining adherence to the DICOM standard and preserving the integrity of the patient information contained within," according to the report. "We've dubbed such files, which intertwine executable malware with patient information, PE/DICOM files." By exploiting this design flaw, the report says, attackers could "take advantage of the abundance and centralization of DICOM imagery within healthcare organizations to increase stealth and more easily distribute their malware, setting the stage for potential evasion techniques and multistage attacks." The fusion of fully functioning executable malware with HIPAA protected patient information adds regulatory complexities and clinical implications to automated malware protection and typical incident response processes, the researchers say.
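The hybrid nature of a PE/DICOM file is detectable from its first 132 bytes: a standard DICOM file has a 128-byte free-form preamble followed by the magic bytes "DICM", while a Windows PE executable begins with "MZ". A file showing both signatures at once is exactly the hybrid the researchers describe. A minimal scanner sketch (the sample byte strings are synthetic, not real malware):

```python
def looks_like_pe_dicom(data: bytes) -> bool:
    """Flag files that are simultaneously a PE executable and a DICOM image."""
    return (
        len(data) >= 132
        and data[:2] == b"MZ"          # DOS/PE header hidden in the preamble
        and data[128:132] == b"DICM"   # still a spec-compliant DICOM file
    )

# Synthetic samples: a benign image leaves the preamble as zeros; the
# hybrid packs an executable header into those first 128 bytes.
benign_dicom  = b"\x00" * 128 + b"DICM" + b"<image data>"
hybrid_sample = b"MZ" + b"\x00" * 126 + b"DICM" + b"<image data>"
```

Because the DICOM standard leaves the preamble unconstrained, a check like this is heuristic triage, not a fix; it simply surfaces files that warrant the deeper inspection the report recommends.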


Sometimes, rather than look at problem areas in the business, he says the team focuses on exploring pure technology. As an example, Chatrain points to Generative Adversarial Networks (GANs), algorithms that can generate fake data such as pictures of people who do not actually exist. “We dedicate part of our exploratory time to such techniques and technologies and then look for applications,” he says. Looking at a practical example of how a fake data algorithm could be deployed, he says: “With GDPR and the need to feed test systems with high volumes of realistic data, we used [synthetic data algorithms] to create fake travellers with travel itineraries.” Such synthetic data is indistinguishable from the data that represents the travel plans of real people, and it can be used to test the robustness of systems at Amadeus. “Today, no one tests the systems if we have twice as much data,” says Chatrain. But this is possible if data for a vast increase in passenger numbers is simply generated via a synthetic data algorithm. Beyond being used to test application software, he says synthetic data also enables Amadeus to anonymise the data it shares with third parties. “We are not allowed to share [personal] data, but we still need a business partnership.”
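The idea of scaling test load with generated travellers can be illustrated with a trivial stdlib generator. This is only a sketch of the concept, far simpler than a GAN; the field names and airport codes are invented for the example, and no real passenger data is involved.

```python
import random

AIRPORTS = ["NCE", "CDG", "LHR", "JFK", "SIN", "MAD"]  # made-up sample set

def fake_traveller(rng):
    """One synthetic traveller with a minimal itinerary."""
    origin, dest = rng.sample(AIRPORTS, 2)   # distinct origin/destination
    return {
        "traveller_id": rng.randrange(10**9),
        "itinerary": [origin, dest],
        "fare_eur": round(rng.uniform(50, 1500), 2),
    }

def synthetic_load(n, seed=42):
    """Generate n fake travellers, e.g. to test with 'twice as much data'.
    Seeding makes the data set reproducible across test runs."""
    rng = random.Random(seed)
    return [fake_traveller(rng) for _ in range(n)]
```

The GDPR angle falls out naturally: because every record is sampled rather than derived from a real person, the data set can feed test systems or third parties without carrying personal data.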


What is project portfolio management? Aligning projects to business goals

With PPM, not only are project, program, and portfolio professionals able to execute at a detailed level, but they are also able to understand and visualize how project, program, and portfolio management ties to an organization’s vision and mission. PPM fosters big-picture thinking by linking each project milestone and task back to the broader goals of the organization. ... Capacity planning and effectively managing resources is largely dependent on how well your PMO executes its strategy and links the use of resources to company-wide goals. It is no secret that wasted resources are one of the biggest issues that companies encounter when it comes to scope creep. PPM decreases the chances of wasted resources by ensuring resources are allocated based on priority and are being effectively sequenced and wisely leveraged to meet intended goals. ... PMOs that communicate to project teams and other stakeholders, such as employees, why and how project tasks are vital in creating value increase the likelihood of a higher degree of productivity.


Startup MemVerge combines DRAM and Optane into massive memory pool
Optane memory is designed to sit between high-speed memory and solid-state drives (SSDs) and acts as a cache for the SSD, since it has speed comparable to DRAM but the persistence of an SSD. With Intel’s new Xeon Scalable processors, this can make up to 4.5TB of memory available to a processor. Optane runs in one of two modes: Memory Mode and App Direct Mode. In Memory Mode, the Optane memory functions like regular memory and is not persistent. In App Direct Mode, it functions as the SSD cache, but apps don’t natively support it; they need to be tweaked to function properly in Optane memory. As it was explained to me, apps aren’t designed for persistent memory, where the data is already in memory on power-up rather than having to be loaded from storage. The app has to know that the memory doesn’t go away and that it does not need to shuffle data back and forth between storage and memory. Therefore, apps natively don’t work in persistent memory.
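A rough stdlib analogy for the App Direct programming model: the application addresses a persistent region directly via loads and stores (here, a memory-mapped file) and must explicitly flush, instead of shuffling data between a storage tier and memory. Real persistent-memory code would use a library such as Intel's PMDK; this sketch only illustrates why apps "have to know the memory doesn't go away."

```python
import mmap
import os
import tempfile

# Carve out a 4 KiB region standing in for a persistent-memory range.
path = os.path.join(tempfile.mkdtemp(), "pmem_region")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

# App Direct style: store directly into the mapped region, then flush
# to make the store durable -- no separate "save to storage" step.
with open(path, "r+b") as f:
    region = mmap.mmap(f.fileno(), 4096)
    region[0:5] = b"hello"
    region.flush()
    region.close()

# After a "power cycle" (reopening), the data is simply there; nothing
# is loaded from a separate storage tier into memory first.
with open(path, "rb") as f:
    restored = f.read(5)
```

The tweak the article describes is exactly this shift: on startup the app must treat its data as already present (and possibly half-written), rather than assuming a clean load from storage.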


The group hopes to turn out the first iteration of its Token Taxonomy Framework (TTF) later this year; afterward, it plans to educate the blockchain community and collaborate through structured Token Definition Workshops (TDWs) to define new or existing tokens. Once defined, the taxonomy can be used by businesses as a baseline to create blockchain-based applications using digital representations of everything from supply chain goods to non-fungible items such as invoices. "We'll do some workshops...to validate and make sure we have the base definition of a non-fungible token," said Marley Gray, Microsoft's principal architect for Azure blockchain engineering and a member of the EEA's Board of Directors. "As we go through workshops, we will probably find we should add this attribute or this clarification or this example that helps someone understand it." The organizations that have agreed to participate in the standardization effort include Accenture, Banco Santander, Blockchain Research Institute, BNY Mellon, Clearmatics, ConsenSys, Digital Asset, EY, IBM, ING, Intel, J.P. Morgan, Komgo, R3, and Web3 Labs.



Each micro-component runs an independent processing flow that performs a single task. For example, if your application has a network layer, you may also have Network Receiver and Network Sender components which only have the responsibility for receiving/sending data through the network. If your application has a logging layer it might also be implemented as an independent micro-component. Each micro-component defines its own interface of outgoing/incoming events, and the internal processing flow for them. For example, the Network Receiver might define the OutgoingClientRequests channel, which would be populated with newly received requests from the users. Interfaces, as you might guess, are implemented on top of channels, so the communication flows look very obvious, predictable, and easily maintainable in this perspective. The core’s role is to connect various outgoing channels with various incoming channels and to enable data flow between various micro-components.
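The channel-based design above can be sketched with queues: each micro-component runs its own processing loop, exposes incoming/outgoing channels, and the core merely wires channels together. The component names follow the text (Network Receiver, a logging component, an OutgoingClientRequests channel); the processing itself is made up for illustration.

```python
import queue
import threading

def network_receiver(outgoing_client_requests, raw_input):
    """Single task: receive data and publish parsed requests."""
    for data in raw_input:
        outgoing_client_requests.put(data.upper())   # pretend-parse
    outgoing_client_requests.put(None)               # end-of-stream marker

def logger(incoming, log_lines):
    """Independent micro-component consuming an incoming channel."""
    while (item := incoming.get()) is not None:
        log_lines.append(f"LOG: {item}")

# The core's role: connect the receiver's outgoing channel to the
# logger's incoming channel. Components never reference each other.
outgoing_client_requests = queue.Queue()
log_lines = []

t1 = threading.Thread(target=network_receiver,
                      args=(outgoing_client_requests, ["ping", "pong"]))
t2 = threading.Thread(target=logger,
                      args=(outgoing_client_requests, log_lines))
t1.start(); t2.start()
t1.join(); t2.join()
```

Because every interface is just a channel, swapping the logger for a different consumer, or fanning one outgoing channel into several incoming ones, is a change in the core's wiring, not in any component.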


Cisco Talos details exceptionally dangerous DNS hijacking attack

Talos noted “with high confidence” that these operations are distinctly different and independent from the operations performed by DNSpionage. In that report, Talos said a DNSpionage campaign utilized two fake, malicious websites containing job postings that were used to compromise targets via malicious Microsoft Office documents with embedded macros. The malware supported HTTP and DNS communication with the attackers. In a separate DNSpionage campaign, the attackers used the same IP address to redirect the DNS of legitimate .gov and private company domains. During each DNS compromise, the actor carefully generated Let's Encrypt certificates for the redirected domains; Let's Encrypt provides X.509 certificates for Transport Layer Security (TLS) free of charge to the user, Talos said. The Sea Turtle campaign gained initial access either by exploiting known vulnerabilities or by sending spear-phishing emails. Talos said it believes the attackers have exploited multiple known common vulnerabilities and exposures (CVEs) to either gain initial access or to move laterally within an affected organization.


Wipro Detects Phishing Attack: Investigation in Progress

Wipro's systems were seen being used as jumping-off points for digital phishing expeditions targeting at least a dozen Wipro customer systems, the blog says. "Wipro's customers traced malicious and suspicious network reconnaissance activity back to partner systems that were communicating directly with Wipro's network," according to the blog. In a statement, Wipro says: "Upon learning of the incident, we promptly began an investigation, identified the affected users and took remedial steps to contain and mitigate any potential impact." The firm tells ISMG that none of its customers' credentials have been affected, as was alleged in the blog. Some security experts, however, say Wipro may be the victim of a nation-state sponsored attack. "It is most likely by a nation-state. They use this modus operandi to breach a vendor network first and through that route attack their customers," says a Bangalore-based security expert, who did not wish to be named. "That is because customers will consider Wipro's network safe."



Quote for the day:


"A good leader leads the people from above them. A great leader leads the people from within them." -- M.D. Arnold