Daily Tech Digest - June 20, 2019

Researchers say 6G will stream human brain-caliber AI to wireless devices


The most relatable one would enable wireless devices to remotely transfer quantities of computational data comparable to a human brain in real time. As the researchers explain it, “terahertz frequencies will likely be the first wireless spectrum that can provide the real time computations needed for wireless remoting of human cognition.” Put another way, a wireless drone with limited on-board computing could be remotely guided by a server-sized AI as capable as a top human pilot, or a building could be assembled by machinery directed by computers far from the construction site. Some of that might sound familiar, as similar remote control concepts are already in the works for 5G — but with human operators. The key with 6G is that all this computational heavy lifting would be done by human-class artificial intelligence, pushing vast amounts of observational and response data back and forth. By 2036, the researchers note, Moore’s law suggests that a computer with human brain-class computational power will be purchasable by end users for $1,000, the cost of a premium smartphone today; 6G would enable earlier access to this class of computer from anywhere.



Serverless Computing from the Inside Out

Fundamentally, cybersecurity isn't about threats and vulnerabilities. It's about business risk. The interesting thing about business risk is that it sits at the core of the organization. It is the risk that results from company operations — whether that risk be legal, regulatory, competitive, or operational. This is why the outside-in approach to cybersecurity has been less than successful: Risk lives at the core of the organization, but cybersecurity strategy and spending have been dictated by factors outside of the organization with little, if any, business risk context. This is why we see organizations devoting too many resources to defend against threats that really aren't major business risks, and too few to those that are. To break the cycle of outside-in futility, security organizations need to change their approach so that they align with other enterprise risk management functions. And that approach is to turn outside-in on its head and take an inside-out approach to cybersecurity. Inside-out security is not based on the external threat landscape; it's based on an enterprise risk model that defines and prioritizes the relative business risk presented by an organization's digital operations and initiatives.


Post-Hadoop Data and Analytics Head to the Cloud

Gartner analyst Adam Ronthal said that while there are some native Hadoop options available in public clouds like AWS, they may not be the best solution for many applications. "There's a fair bit of complexity that goes into managing a Hadoop cluster," he told InformationWeek. Non-Hadoop-based cloud solutions may look simpler and easier to organizations that are evaluating data and analytics solutions. But that doesn't mean there's not a place for Hadoop in the future. Ronthal said that Hadoop is experiencing a "market correction" rather than an existential crisis. There are use cases that Hadoop is really good at, he said. But a few years back, Hadoop was the rock star technology that was the solution to every problem. "The promises out there 3, 4, or 5 years ago were that Hadoop was going to change the world and redefine how we did data management," he said. "That statement overpromised and underdelivered. What we are really seeing now is recognition of workloads that Hadoop is really good at, like the data science exploration workloads."


Artificial intelligence could revolutionize medical care. But don’t trust it to read your x-ray just yet


The algorithms learn as scientists feed them hundreds or thousands of images—of mammograms, for example—training the technology to recognize patterns faster and more accurately than a human could. “If I’m doing an MRI of a moving heart, I can have the computer predict where the heart’s going to be in the next fraction of a second and get a better picture instead of a blurry” one, says Krishna Kandarpa, a cardiovascular and interventional radiologist at the National Institute of Biomedical Imaging and Bioengineering in Bethesda, Maryland. Or AI might analyze computed tomography head scans of suspected strokes, label those more likely to harbor a brain bleed, and put them on top of the pile for the radiologist to examine. An algorithm could help spot breast tumors in mammograms that a radiologist’s eyes risk missing. But Eric Oermann, a neurosurgeon at Mount Sinai Hospital in New York City, has explored one downside of the algorithms: The signals they recognize can have less to do with disease than with other patient characteristics, the brand of MRI machine, or even how a scanner is angled.


Cybersecurity Risk Assessment – Made Easy 

A cybersecurity risk assessment is critical because cyber risks are part and parcel of any technology-oriented business. Factors such as lax cybersecurity policies and vulnerable technological solutions expose an organization to security risks. Failing to manage such risks gives cybercriminals opportunities to launch massive cyberattacks. Fortunately, a cybersecurity risk assessment allows a business to detect existing risks. It also facilitates risk analysis and evaluation to identify vulnerabilities with higher damage potential. As a result, a business can identify suitable controls for addressing the risks. ... Cybersecurity risk assessments have many other benefits, all aimed at bolstering organizational security. Most importantly, they are how a company identifies the most suitable security controls needed to achieve an optimum cybersecurity approach.


Cybersecurity Accountability Spread Thin in the C-Suite

"CEOs are no longer looking at cyber-risk as a separate topic. More and more they have it embedded into their overall change programs and are beginning to make strategic decisions with cyber-risk in mind," says Tony Buffomante, global co-leader of cybersecurity services at KPMG. "It is no longer viewed as a standalone solution."  That sounds good at the surface level, but other recently surfaced statistics offer grounding counterbalance. A global survey of C-suite executives released last week by Nominet indicates these top executives have some serious gaps in knowledge about cybersecurity, with around 71% admitting they don't know enough about the main threats their organizations face. This corroborates with a survey of CISOs conducted earlier this year by the firm that indicates security knowledge and expertise possessed by the board and C-levels is still dangerously low. Approximately 70% of security executives agree that at least one cybersecurity specialist should be on the board in order for it to take appropriate levels of due diligence in considering the issues.


Industrial IoT as Practical Digital Transformation

To navigate this journey in the face of both uncertainty and hype, their company leaders chose a measured approach of “practical” digital transformation. To begin, they adopted IoT through an iterative process of incremental value testing. Notably, they selected goals for increasing internal effectiveness rather than fixating on new customer offerings. As a result, usage data from equipment inside customer facilities now empowers a more cost-effective services team and reduces truck rolls. Furthermore, understanding how their machines are operated in the field enables product teams to proactively identify problem areas and continuously improve their equipment offerings. Both use cases are internal rather than directly customer-facing. Yet it’s their customers who ultimately benefit from higher operational productivity enabled by these ever-smarter systems. Moving forward, machine utilization numbers will better prepare sales teams for guiding customers toward systems best matching their true capacity needs, as well as inform warranty management issues. Connected systems create opportunities for exceeding customer expectations at every turn.


Why the Cloud Data Diaspora Forces Businesses to Rethink their Analytics Strategies

The single biggest thing is it allows you to scale and manage workloads at a much finer grain of detail through auto-scaling capabilities provided by orchestration environments such as Kubernetes. More importantly it allows you to manage your costs. One of the biggest advantages of a microservice-based architecture is that you can scale up and scale down to a much finer grain. For most on-premises, server-based, monolith architectures, customers have to buy infrastructure for peak levels of workload. We can scale up and scale down those workloads -- basically on the fly -- and give them a lot more control over their infrastructure budget. It allows them to meet the needs of their customers when they need it. ... A lot of Internet of Things (IoT) implementations are geared toward collecting data at the sensor, transferring it to a central location to be processed, and then analyzing it all there. What we want to do is push the analytics problem out to the edge so that the analytic data feeds can be processed at the edge. 
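The fine-grained auto-scaling described above is typically declared with a Kubernetes HorizontalPodAutoscaler. A minimal sketch of such a manifest, where the service name and replica limits are placeholders, not values from the interview:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: analytics-service          # placeholder microservice name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: analytics-service
  minReplicas: 2                   # floor during quiet periods
  maxReplicas: 20                  # ceiling for peak workload
  targetCPUUtilizationPercentage: 70
```

Kubernetes adds or removes replicas between the floor and ceiling as load changes, which is the cost-control lever the passage contrasts with buying on-premises hardware for peak load.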


Brexit, GDPR and the flow of data: there could be one winner and that’s the cyber criminal

Huseyin advocates technology. It may not come as a shock to learn he advocated a product from nsKnox. He refers to Cooperative Cyber Security, which allows data to be shared across organisations and networks in a form that is cryptographically shredded. “If you can take information with identifiers and put it into a form which is actually meaningless and shred it cryptographically and then distribute it to the partners of the data consortium who want to be able to access that information, you’re now pushing data around the world potentially without ever exposing the actual underlying information. “So for example, we could take your name and we can shred it and we can distribute it to let’s say two banks in Europe and two banks in the UK. Each of those banks holds a piece of information and collectively that information makes up your name, but individually those pieces of information are just bits of encrypted binary data. So totally meaningless.” So that’s two potential solutions: get close to the regulator and apply appropriate technology.
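The shred-and-distribute idea can be illustrated with simple XOR secret splitting. This is only a sketch of the general technique, not nsKnox's actual scheme: each share on its own is random noise, and only combining all the shares restores the original.

```python
import os

def shred(data, n_shares=4):
    """Split data into n_shares pieces; any subset short of all of them
    is statistically indistinguishable from random bytes."""
    shares = [os.urandom(len(data)) for _ in range(n_shares - 1)]
    final = data
    for s in shares:
        final = bytes(a ^ b for a, b in zip(final, s))
    return shares + [final]

def reassemble(shares):
    """XOR all shares back together to recover the original data."""
    out = bytes(len(shares[0]))   # all-zero starting point
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

name = b"Jane Smith"              # invented example value
shares = shred(name)              # e.g. distribute one share to each bank
restored = reassemble(shares)
```

Production schemes add key management, integrity checks, and threshold recovery (so a lost share is not fatal), but the core property, that individual pieces are meaningless, is the same.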


How to Use Open Source Prometheus to Monitor Applications at Scale

Using Prometheus, we looked to monitor “generic” application metrics, including the throughput (TPS) and response times of the Kafka load generator (Kafka producer), Kafka consumer, and Cassandra client (which detects anomalies). Additionally, we wanted to monitor some application-specific metrics, including the number of rows returned for each Cassandra read, and the number of anomalies detected. We also needed to monitor hardware metrics such as CPU for each of the AWS EC2 instances the application runs on, and to centralize monitoring by adding Kafka and Cassandra metrics there as well.  To accomplish this, we began by creating a simple test pipeline with three methods (producer, consumer, and detector). We then used a counter metric named “prometheusTest_requests_total” to measure how many times each stage of the pipeline executes successfully, and a label called “stage” to tell the different stage counts apart (using “total” for the total pipeline count). Finally, we used a second counter named “prometheusTest_anomalies_total” to count detected anomalies.
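The counter scheme above can be sketched in plain Python. This stand-in only mimics the labeled counters; a real deployment would register `prometheusTest_requests_total` and `prometheusTest_anomalies_total` with a Prometheus client library and expose them for scraping. The pipeline values and threshold are invented for illustration.

```python
from collections import defaultdict

requests_total = defaultdict(int)   # stage label -> prometheusTest_requests_total
anomalies_total = 0                 # prometheusTest_anomalies_total

def producer(value):
    requests_total["producer"] += 1
    return value

def consumer(value):
    requests_total["consumer"] += 1
    return value

def detector(value, threshold=10):
    global anomalies_total
    requests_total["detector"] += 1
    if abs(value) > threshold:      # toy anomaly rule
        anomalies_total += 1

def run_pipeline(value):
    detector(consumer(producer(value)))
    requests_total["total"] += 1    # "total" label counts whole-pipeline runs

for v in [1, 2, 50, 3, -20]:        # 50 and -20 exceed the threshold
    run_pipeline(v)
```

Each stage increments its own `stage` label value, while `total` increments once per complete pipeline run, matching the labeling described in the text.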



Quote for the day:


"Good things come to people who wait, but better things come to those who go out and get them." -- Anonymous


Daily Tech Digest - June 19, 2019

RPA use cases that take RPA to the next level


"Start with a small piece of a larger process and take on more," he said. "Then, look upstream and downstream, and ask, 'How do we take that small use case and expand the scope of what the bot is automating? Can we take on more steps in the process, or can we initiate the automation earlier in the process to grow?'" At the same time, Abel said CIOs should create an RPA center of excellence and develop the tech talent needed to take on bigger RPA use cases. He agreed that a strong RPA governance program to ensure the bots are monitored and address any change control procedures is crucial. It's also essential to maintain a strong data governance program, he said, as the bots need good data to operate accurately. Additionally, Abel said he advises CIOs to work with other enterprise executives to develop RPA use cases that align to business objectives so that RPA deployments have long-term value. Abel pointed to one client's experience as a cautionary tale. He said that company jumped right into RPA, deploying bots to automate various tasks. 


The underlying blockchain transactional network will be able to handle thousands of transactions per second; data about the financial transactions will be kept separate from data about the social network, according to David Marcus, the former president of PayPal. He is now leading Facebook's new digital wallet division, Calibra. Aside from limited cases, Calibra will not share account information or financial data with Facebook or any third party without customer consent, the social network said in a statement. "This means Calibra customers' account information and financial data will not be used to improve ad targeting on the Facebook family of products," Facebook said. Calibra and its underlying blockchain distributed ledger will scale to meet the demands of "billions," Marcus said in an interview with Fox Business News this morning. Libra is different from other cryptocurrencies, such as bitcoin, in that it is backed by fiat currency, so its value is not simply determined by supply and demand. Bitcoin is "not a good medium of exchange today because [fiat] currency is actually very stable and bitcoin is volatile," Marcus said in the Fox Business News interview.


Western Digital launches open-source zettabyte storage initiative

With this project Western Digital is targeting cloud and hyperscale providers and anyone building a large data center who has to manage a large amount of data, according to Eddie Ramirez, senior director of product marketing for Western Digital. Western Digital is changing how data is written and stored, moving from traditional random 4K block writes to the large sequential blocks typical of big data workloads and video streams, which are rapidly growing in size and use in the digital age. “We are now looking at a one-size-fits-all architecture that leaves a lot of TCO [total cost of ownership] benefits on the table if you design for a single architecture,” Ramirez said. “We are looking at workloads that don’t rely on small block randomization of data but large block sequential write in nature.” Because drives use 4K write blocks, that leads to over-provisioning of storage, especially around SSDs. This is true of consumer and enterprise SSDs alike. My 1TB SSD drive has only 930GB available. And that loss scales: an 8TB SSD has only 6.4TB available, according to Ramirez.
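The capacities quoted above imply roughly 7% user-visible loss on the 1TB consumer drive and 20% on the 8TB enterprise drive. A quick check of the over-provisioning percentages:

```python
def overprovision_pct(raw_tb, usable_tb):
    """Share of raw capacity reserved by the drive, as a percentage."""
    return round(100 * (1 - usable_tb / raw_tb), 1)

consumer = overprovision_pct(1.0, 0.93)    # 1TB drive with 930GB usable
enterprise = overprovision_pct(8.0, 6.4)   # 8TB drive with 6.4TB usable
```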



'Extreme But Plausible' Cyberthreats

A new report from Accenture highlights five key areas where cyberthreats in the financial services sector will evolve. Many of these threats could comingle, making them even more disruptive, says Valerie Abend, a managing director at Accenture who's one of the authors of the report. The report, "Future Cyber Threats: Extreme But Plausible Threat Scenarios in Financial Services," focuses on credential and identity theft; data theft and manipulation; destructive and disruptive malware; cyberattackers' use of emerging technologies, such as blockchain, cryptocurrency and artificial intelligence; and disinformation campaigns. In an interview with Information Security Media Group, Abend offers an example of how attackers could comingle threats. If attackers were to wage "a multistaged attack using credential theft against multiple parties that then used disruptive or destructive malware, so that they actually change the information at key points in the business process of critical financial functions ... and then used misinformation outside of that entity using various parts of social media ... they could really do some serious damage," Abend says.


How to prepare for and navigate a technology disaster

Two key developments will have the largest impact on business continuity and disaster recovery planning. The first is serverless architecture. Using this term very loosely, the adoption of these capabilities will dramatically increase application and data portability and enable workloads to be executed virtually anywhere. We're quite a bit of a way from this being the default way you build applications, but it's coming, and it's coming fast. The second is edge computing. As modern applications and business intelligence are moved to the edge, the ability to 'fail over' to additional resources will increase, minimizing (if not eliminating) real and perceived downtime. The more identical places you can run your application, the better the level of availability and performance is going to be. This definitely isn't simple, but we're seeing (and developing) applications each and every day that are built with this architecture in mind, and it's game changing for enterprise and application architecture and planning.


Q&A on the Book Risk-First Software Development

The Risk Landscape is the idea that whenever we do something to deal with one risk, what’s actually happening is that we’re going to pick up other risks as a result. For example, hiring new developers into a team might mean you can clear more Feature Risks (by building features the customers need), but it also means you’re going to pick up Coordination and Agency Risk, because of your bigger team. So, you’re moving about on a Risk Landscape, hoping to find a nice position where the risks are better for you. This first volume of Risk-First Software Development was all about that landscape, and the types of risks you’ll find on it. I am planning a second volume, which again will all be available to read on riskfirst.org. This will focus more on the tools and techniques you can use to navigate the Risk Landscape.  For example, if I have a distributed team, I might face a lot of Coordination Risk, where work is duplicated, or people step on each other’s toes. What are the techniques I can use to address that? I could introduce a chat tool like Slack, but it might end up wasting developer time and causing more Schedule Risk. 


Microservices Chassis Pattern
This is not something new. Reusability is something we learn at the very beginning of our developer lives. This pattern cuts down on redundancy and complexity across services by abstracting the common logic into a separate layer. If you have a very generic chassis, it could even be used across platforms or organizations and wouldn't need to be limited to a specific project. It depends on how you write it and what piece of logic you move to this framework. A chassis is part of your microservices infrastructure layer. You can move all sorts of connectivity, configuration, and monitoring to a base framework. ... When you start writing a new service by identifying a domain (DDD) or by identifying the functionality, you might end up writing lots of common code. As you progress and create more and more services, it could result in code duplication, or even chaos, in managing such common concerns and redundant functionalities. Moving such logic to a common place and reusing it across different services improves the overall lifecycle of your services. You might spend some initial effort creating this component, but it will make your life easier later on.
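A minimal sketch of the pattern, with illustrative names: a base class owns the cross-cutting concerns the text lists (configuration, logging, basic monitoring), and each microservice supplies only its business logic.

```python
import logging

class ServiceChassis:
    """Hypothetical chassis layer: shared configuration, logging,
    and request/error counting for every service built on it."""
    def __init__(self, name, config=None):
        self.name = name
        self.config = config or {}
        self.logger = logging.getLogger(name)
        self.metrics = {"requests": 0, "errors": 0}

    def handle(self, request):
        """Wrap business logic with shared monitoring and error handling."""
        self.metrics["requests"] += 1
        try:
            return self.process(request)
        except Exception:
            self.metrics["errors"] += 1
            self.logger.exception("request failed")
            raise

    def process(self, request):
        raise NotImplementedError   # each microservice implements only this

class OrderService(ServiceChassis):
    def process(self, request):
        return {"order_id": request["id"], "status": "accepted"}

svc = OrderService("orders")
result = svc.handle({"id": 42})
```

A new service subclasses the chassis and writes only `process`; connectivity, metrics, and error handling come for free, which is the redundancy reduction the passage describes.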


Remember, data is used for dealing with your customers, making decisions, generating reports, and understanding revenue and expenditures. Everyone from the customer service team to your senior executive team uses data and relies on it being good enough to use. Data governance provides the foundation so that everything else can work. This will include obvious “data” activities like master data management, business intelligence, big data analytics, machine learning and artificial intelligence. But don’t get stuck thinking only in terms of data. Lots of processes in your organization can go wrong if the data is wrong, leading to customer complaints, damaged stock, and halted production lines. Don’t limit your thinking to only data activities. If your organization is using data (and to be honest, which companies aren’t?) you need data governance. Some people may not believe that data governance is sexy, but it is important for everyone. It need not be an overly complex burden that adds controls and obstacles to getting things done. Data governance should be a practical thing, designed to proactively manage the data that is important to your organization.



A well-managed cloud storage service ties directly into the apps you use to create and edit business documents, unlocking a host of collaboration scenarios for employees in your organization and giving you robust version tracking as a side benefit. Any member of your organization can, for example, create a document (or a spreadsheet or presentation) using their office PC, and then review comments and changes from co-workers using a phone or tablet. A cloud-based file storage service also allows you to share files securely, using custom links or email, and it gives you as administrator the power to prevent people in your organization from sharing your company's secrets without permission. With the assistance of sync clients for every major desktop and mobile platform, employees have access to key work files anytime, anywhere, on any device. You might already have access to full-strength cloud collaboration features without even knowing it. If you use Microsoft's Office 365 or Google's G Suite, cloud storage isn't a separate product, it's a feature.


Boost QA velocity with incremental integration testing

There are several strategies for incremental integration testing, including bottom-up, top-down and a hybrid approach blending elements of both, as well as automation. Each method has benefits and limitations, and gets incorporated into an overall test strategy in different ways. These incremental approaches help enable shift-left testing, which means automation shapes how teams can perform the practice. ... Often, the best approach is hybrid, or sandwich, integration testing, which combines both top-down and bottom-up techniques. Hybrid integration testing exploits bottom-up and top-down during the same test cycles. Testers use both drivers and stubs in this scenario. The hybrid approach is multilayered, testing at least three levels of code at the same time. Hybrid integration testing offers the advantages of both approaches, all in support of shift left. Some of the disadvantages remain, especially as the test team must work on both drivers and stubs.
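The driver/stub distinction can be made concrete with a toy two-layer example (all names are illustrative): a driver exercises the real lower layer directly (bottom-up), a stub replaces the lower layer so the upper layer can be tested first (top-down), and a hybrid run uses both real layers together.

```python
def tax_for(amount):
    """Real low-level unit: the bottom layer of the sandwich."""
    return round(amount * 0.2, 2)

def checkout(amount, tax_fn=tax_for):
    """Top layer; its lower dependency is injectable so a stub can replace it."""
    return amount + tax_fn(amount)

# Bottom-up: a driver calls the low-level unit before any upper layer exists.
driver_result = tax_for(100)

# Top-down: a stub stands in for the lower layer with a canned response,
# so the top layer can be tested before the real tax logic is ready.
def tax_stub(amount):
    return 5.0

stubbed_result = checkout(100, tax_fn=tax_stub)

# Hybrid (sandwich): both layers exercised together in the same cycle.
hybrid_result = checkout(100)
```

In a real suite the driver and stub would live in the test harness rather than inline, but the division of labor is the same at any scale.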



Quote for the day:


"What you do makes a difference, and you have to decide what kind of difference you want to make." -- Jane Goodall


Daily Tech Digest - June 18, 2019

7 Promises of Machine Learning in Finance


Machine Learning is the new black, or the new oil, or the new gold! Whatever you compare Machine Learning to, it's probably true from a conceptual value perspective. But what about its relation to finance: what is the situation today? Banks keep everything: a history of transactions, conversations with clients, internal information, and more. Storage is literally bloated to terabytes, sometimes petabytes. Big Data can solve this problem and process huge amounts of information: the greater the amount, the greater the detectable needs and behaviors of the client. Artificial Intelligence paired with Machine Learning allows software to learn clients' behavior and make autonomous decisions. ... It goes without saying that machine learning is remarkably good for finance, and the promise of this technology, including Big Data and Artificial Intelligence, is very high. As you can see, there are lots of options, approaches, and applications for improvement: choosing an optimal location for a bank, finding the best solutions for a customer, turning algorithmic trading into intelligent trading, managing risk and preventing fraud ...


What Makes A Software Programmer a Professional?

Professional software developers understand that they generally have more knowledge of software development than the customers that have hired them to write code. Thus they understand that writing secure code, code that can’t be easily abused by hackers, is their responsibility. A software developer creating web applications probably needs to address more security risks than a developer writing embedded drivers for an IoT device, but each needs to assess the different ways the software is vulnerable to abuse and take steps to eliminate or mitigate those risks. Although it may be impossible to guarantee that any software is immune to an attack, professional developers will take the time to learn and understand the vulnerabilities that could exist in their software, and then take the subsequent steps to reduce the risk that their software is vulnerable. Protecting your software from security risks usually includes both static analysis tools and processes to reduce the introduction of security errors, but it primarily relies upon educating those writing the code.


When serverless is a bad idea
Indeed, many refer to serverless as “no-ops,” but it’s really “reduced-ops,” or as my friend Mike Kavis likes to say, “some-ops.” Clearly the objective is to increase simplicity and make building and deploying net-new cloud-based serverless applications much more productive and agile. But serverless is not always a good idea. Indeed, it seems to be a forced fit a good deal of the time, causing more error than trial. Serverless is an especially bad idea when it comes to stateful applications. A stateless application means that every transaction is performed as if it were being done for the very first time. There is no previously stored information used for the current transaction. In contrast, a stateful application saves client data from the activities of one session for use in another. The data that is saved is often called the application’s state. Stateful applications are a bad fit for serverless.  Why? Serverless applications are made up of sets of services (such as functions) that are short running and stateless. Consider them transactions in the traditional sense, in that they are invoked and they don’t maintain anything after being executed.
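The stateless/stateful distinction looks like this in a hypothetical Python function-as-a-service handler; the names and event shapes are invented. The second handler relies on instance state that a serverless platform is free to discard between invocations.

```python
def stateless_handler(event):
    """Stateless: everything needed arrives in the event; nothing
    survives the invocation. This is what serverless platforms assume."""
    return {"total": sum(event["items"])}

# Stateful counterexample: depends on module state persisting across calls.
# On a serverless platform each invocation may land on a fresh instance,
# so this counter silently resets, which is the mismatch the article describes.
call_count = 0

def stateful_handler(event):
    global call_count
    call_count += 1
    return {"calls_seen_by_this_instance": call_count}
```

Two invocations on the same warm instance see 1, then 2; once the platform recycles the instance, a later call would see 1 again, so any state that matters must live in an external store instead.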



Two Weekend Outages, Neither a Cyberattack

The power outages in Argentina occurred just prior to local elections being held in many parts of the country. Suspicious, right? But not as suspicious as what experts say is the country's aging power infrastructure. Indeed, Argentina's Clarín newspaper reports that the Argentinian government has blamed the "gigantic" outage on the country's electric power interconnection systems collapsing due to coastal storms. Officials also noted that the rolling blackouts had taken out not just Argentina, but large parts of Uruguay. Apparently, that's also what took out parts of Paraguay and Chile. ... Target suffered twin outages, neither of which trace to a hack attack. On Saturday, customers were unable to buy any goods in stores or online as a result of an outage caused by what Target blamed on "an internal technology issue" that lasted about two hours. "Our technology team worked quickly to identify and fix the issue, and we apologize for the inconvenience and frustration this caused for our guests," a Target spokesman said. 


AI storage: Machine learning, deep learning and storage needs


The storage and I/O requirements of AI are not the same throughout its lifecycle. Conventional AI systems need training, and during that phase they will be more I/O-intensive, which is where they can make use of flash and NVMe. The “inference” stage will rely more on compute resources, however. Deep learning systems, with their ability to retrain themselves as they work, need constant access to data. “When some organisations talk about storage for machine learning/deep learning, they often just mean the training of models, which requires very high bandwidth to keep the GPUs busy,” says Doug O'Flaherty, a director at IBM Storage. “However, the real productivity gains for a data science team are in managing the entire AI data pipeline from ingest to inference.” The outputs of an AI program, for their part, are often small enough that they are no issue for modern enterprise IT systems. This suggests that AI systems need tiers of storage and, in that respect, they are not dissimilar to traditional business analytics or even enterprise resource planning (ERP) and database systems.



Can Your Patching Strategy Keep Up with the Demands of Open Source?

An alarming number of companies aren't applying patches in a timely fashion (for both proprietary and open source software), opening themselves to risk. The reasons for not patching are varied: Some organizations are overwhelmed by the endless stream of available patches and are unable to prioritize what needs to be patched, some lack the trained resources to apply patches, and some need to balance risk with the financial costs of addressing that risk. Unpatched software vulnerabilities are one of the biggest cyberthreats that organizations face, and unpatched open source components in software add to security risk. The 2019 OSSRA report notes that 60% of the codebases audited in 2018 contained at least one open source vulnerability. In 2018, the NVD added over 16,500 new vulnerabilities. This represents a rate of over 45 new disclosures daily, or a pace most organizations are ill equipped to handle. Given open source components are consumed both in source form as well as from commercial applications, a comprehensive open source governance strategy should encompass both source code usage as well as the governance practices for any software or service provider.


3 rules for succeeding with AI and IoT at the edge


The primary value of combining AI, IoT and edge computing is their ability to generate fast, appropriate responses to events signaled by IoT sensors. Virtual and augmented reality applications demand this kind of response, as do enterprise applications in process control and the movement of goods. The cooperation inherent in manufacturing, warehousing, sales and delivery will likely create the sweet spot for an IoT-enabled AI edge. Such activities form a chain of movement of goods that cross many different companies and demand coordination that a single-company IoT model could not provide. ... Think event-flows, not workflows in your application planning. Most enterprise development practices were weaned on transaction processing, and transactions are multistep, contextual, update-centric forms of work. Their pace of generation can be predicted fairly well, and when a transaction is initiated, the flow of information it triggers is usually predictable. Events are simply signals of conditions or changes in conditions.
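An event-flow style can be sketched as handlers registered per event type, each reacting independently to a signal rather than advancing a predefined multistep transaction. Event names, sensor IDs, and the threshold here are invented for illustration.

```python
handlers = {}

def on(event_type):
    """Register a handler for one event type."""
    def register(fn):
        handlers.setdefault(event_type, []).append(fn)
        return fn
    return register

alerts = []

@on("temperature")
def check_overheat(event):
    # React to a single signaled condition; no workflow context needed.
    if event["value"] > 80:
        alerts.append(("overheat", event["sensor"]))

def dispatch(event):
    """Route an incoming IoT event to every handler for its type."""
    for fn in handlers.get(event["type"], []):
        fn(event)

dispatch({"type": "temperature", "sensor": "press-4", "value": 95})
dispatch({"type": "temperature", "sensor": "press-5", "value": 40})
```

Unlike a transaction, each event carries its own context and triggers whatever handlers are registered, so arrival order and rate need not be predictable.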


Building a cyber-physical immune system

To build a credible model of its own behaviour, the system must not just learn its digital behaviour, but also capture the behaviour of its physical subsystems. One way to achieve this is to represent the behaviour in terms of physical laws. For example, moving parts of a system will obey the laws of motion; parts of a heating subsystem will obey the laws of thermodynamics; and electrical installations will obey current and voltage laws. In theory, it is possible to sense relevant physical quantities, apply the correct physical laws and then detect departures from expected behaviour. These deviations suggest that the system might be functioning abnormally, because of its own wear and tear, spontaneous failure, or concerted malicious activity. Anomaly detection, in principle, operates in this manner, but it has been applied rather narrowly to specific subsystems. ... to build a cyber-physical immune system, it is necessary to engage with experts who work on its non-cyber aspects.
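The physical-law idea can be sketched by checking sensor readings against a model. Here the model is Newton's law of cooling, T(t) = T_env + (T0 - T_env)·exp(-k·t), with assumed parameters for a hypothetical heating subsystem; readings that depart from the model beyond a tolerance are flagged.

```python
import math

T_ENV, T0, K = 20.0, 90.0, 0.1   # assumed ambient temp, start temp, cooling rate

def expected_temp(t):
    """Temperature predicted by Newton's law of cooling at time t."""
    return T_ENV + (T0 - T_ENV) * math.exp(-K * t)

def is_anomalous(t, reading, tolerance=5.0):
    """Flag a sensor reading that departs from the physical model."""
    return abs(reading - expected_temp(t)) > tolerance

# A reading close to the physics is normal; a large departure is flagged
# and might indicate wear, failure, or tampering.
normal = is_anomalous(10.0, expected_temp(10.0) + 1.0)
suspicious = is_anomalous(10.0, expected_temp(10.0) + 15.0)
```

A real system would estimate the model parameters from observed behaviour and combine many such subsystem checks, but the core test, reading versus physical law, is the same.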


Many businesses are investing in microservices, for example, to enable faster, more efficient application development. But whereas in traditional models, applications are deployed to application servers, in a microservices-based architecture, servers are deployed to the application. One consequence is that tasks previously handled by the application server—such as authentication, authorization, and session management—are shifted to each microservice. If a business has thousands of such microservices powering its applications across multiple clouds, how can its IT leaders even begin to think of a perimeter? ... Historically, many enterprises applied management and security to only a subset of APIs—e.g., those shared with internal partners and hosted behind the corporate firewall (within a walled garden, for example). But because network perimeters no longer contain all the experiences that drive business, enterprises should think of each API as a possible point of business leverage and a possible point of vulnerability. To adapt to today’s application development demands and threat environment, in other words, APIs should be managed and secured, regardless of where they are located.
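The shift can be sketched roughly as follows (hypothetical names; a real service would verify signed tokens such as JWTs rather than look them up in a set): each microservice now performs its own authentication check before doing any work, a task a central application server used to handle:

```java
import java.util.Set;

// Each microservice embeds its own gatekeeping instead of relying on a
// network perimeter or a shared application server.
class InventoryService {
    private final Set<String> validTokens; // stand-in for real token verification

    InventoryService(Set<String> validTokens) {
        this.validTokens = validTokens;
    }

    String getStockLevel(String token, String sku) {
        // Authentication happens inside the service itself, on every call.
        if (token == null || !validTokens.contains(token)) {
            return "401 Unauthorized";
        }
        return "200 OK: stock for " + sku;
    }
}
```

Multiply this pattern by thousands of services across multiple clouds and the article's point follows: there is no single perimeter left to defend, only per-service, per-API checks.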


Love It or Hate It, Java Continues to Evolve

What’s really important is that Java is continuing to evolve. With the new six-month release cadence of the OpenJDK, it might seem like the pace of change has slowed but, if anything, the reverse is true. We are seeing a constant stream of new features, many of them quite small, yet making developers’ lives much easier. To add big new features to Java takes time because it’s essential to get these things right. We will see in JDK 13 a change to the switch expression, which was introduced as a preview feature in JDK 12. Rather than setting the syntax in stone (via the Java SE specification), preview features allow developers to try a feature and provide feedback before it is finalized. That’s precisely what happened in this case. The longer-term Project Amber will continue to make well-reasoned changes to the language syntax to smooth some of the rough edges that developers find trying at times. You can expect to see more parts of Amber delivered over the next few releases.
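The feature in question looks like this in its arrow form, previewed in JDK 12 and 13 (behind `--enable-preview`) and later standardized in JDK 14: `switch` becomes an expression that yields a value, arrow labels do not fall through, and the compiler checks that all enum cases are covered:

```java
enum Day { MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY }

class SwitchDemo {
    // A switch expression yields a value directly; no break statements,
    // no fall-through, and exhaustiveness is checked at compile time.
    static int numLetters(Day day) {
        return switch (day) {
            case MONDAY, FRIDAY, SUNDAY -> 6;
            case TUESDAY                -> 7;
            case THURSDAY, SATURDAY     -> 8;
            case WEDNESDAY              -> 9;
        };
    }
}
```

The JDK 13 revision the article refers to replaced the preview's `break value;` with the `yield` statement for block-bodied cases; the arrow form shown above was unchanged.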



Quote for the day:


"You must expect great things of yourself before you can do them." -- Michael Jordan


Daily Tech Digest - June 17, 2019

300+ Terrifying Cybercrime and Cybersecurity Statistics & Trends


With global cybercrime damages predicted to cost up to $6 trillion annually by 2021, not getting caught in the landslide is a matter of taking in the right information and acting on it quickly. We collected and organized over 100 up-to-date cybercrime statistics that highlight: the magnitude of cybercrime operations and impact; the attack tactics bad actors used most frequently in the past year; how user behavior is changing and how it… isn’t; what cybersecurity professionals are doing to counteract these threats; how different countries fare in terms of fighting off blackhat hackers and other nation states; and what can be done to keep data and assets safe from scams and attacks. Dig into these surprising (and sometimes mind-boggling) internet security statistics to understand what’s going on globally and discover how several countries fare in protecting themselves. The article includes a handy infographic you can browse to see how each stat is connected to the others, and plenty of visual representations of the most important facts and figures in information security today.


How Blockchain And AI Can Help Master Data Management

Ensuring data security is vital, not only for ethical purposes but also for compliance with regulatory bodies. And no conversation about security and privacy, in this day and age, is complete without the mention of blockchain. Blockchain, which is often considered to be synonymous with privacy, can be used to secure sensitive information that makes up master data. This includes any personal information, such as that pertaining to customers and employees. It can also refer to accounting and banking-related information that may be necessary for processes like procurement and sales. All such information can be secured using blockchain through cryptographic hashing. Businesses can internally build enterprise blockchain networks to secure and manage master data using a decentralized model. This secures the information not only from illicit modification, but also from accidental loss due to physical damage to centralized servers. It also helps in compliance with privacy regulations in an easily demonstrable manner. This is because data on a blockchain, in addition to being immutable, is transparent and visible to all participants, ensuring smoother audits and checks.
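As a simplified, generic illustration of cryptographic hashing in this role (not any vendor's implementation), each master-data record can be linked to its predecessor by a SHA-256 hash; altering any earlier record then changes every later hash, which is what makes illicit modification evident:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

// Toy hash chain: each record's hash depends on the previous hash, so a
// tampered record invalidates every hash that follows it.
class HashChain {
    static String sha256Hex(String input) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        return HexFormat.of().formatHex(md.digest(input.getBytes(StandardCharsets.UTF_8)));
    }

    // Link a record into the chain by hashing it together with the previous hash.
    static String link(String previousHash, String record) throws Exception {
        return sha256Hex(previousHash + "|" + record);
    }
}
```

Verifying the chain is just recomputing the links; any mismatch pinpoints where a record was changed, which is the "easily demonstrable" audit property the article describes.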


Developing a Functional Data Governance Framework


Harvard Business Review reports 92 percent of executives say their Big Data and AI investments are accelerating, and 88 percent talk about a greater urgency to invest in Big Data and AI. In order for AI and machine learning to be successful, Data Governance must also be a success. Data Governance remains elusive to the 87 percent of businesses which, according to Gartner, have lower levels of Business Intelligence. Recent news has also suggested a need to improve Data Governance processes. Data breaches continue to affect customers and the impacts are quite broad, as an organization’s customers (including banks, universities, and pharmaceutical companies) must continually take stock and change their user names and passwords. Effective Data Governance is a fundamental component of data security processes. Data Governance has to drive improvements in business outcomes. “Implementing Data Governance poorly, with little connection or impact on business operations will just waste resources,” says Anthony Algmin, Principal at Algmin Data Leadership. To mature, Data Governance needs to be business-led and a continuous process, as Donna Burbank and Nigel Turner emphasize.


Survey: Data-center staffing shortage remains challenging

Contributing to the staffing crisis is a lack of workplace diversity. In particular, the Uptime Institute’s research highlights a significant gender imbalance: 25 percent of managers surveyed have no women among their design, build or operations staff, and another 54 percent have 10 percent or fewer women on staff. Only 5 percent of respondents said women represent 50 percent or more of staff. Yet most respondents don’t seem to think there’s anything deterring women from working where they work. A majority (85 percent) said it’s easy for women to pursue a career in their respective organization’s data center team or department; just 15 percent said it’s difficult. Referring to the data-center industry as a whole, respondents were less confident about women’s employment prospects: 53 percent said it’s easy for women to pursue a career in data centers, and 47 percent said it’s difficult. In the big picture, diversity issues could become a threat to business operations. “Study after study shows that a lack of diversity is not just a pipeline issue,” Ascierto said. 


How banks can use ecosystems to win in the SME market

In parallel to designing the prototype, banks need to think through IT implications at the outset. The design choices will significantly affect the speed of development and the potential reach of the new solution. A design based on integration with an existing banking app might command a larger audience than a new stand-alone application—yet the latter typically offers more flexibility. The choice of a platform should be wedded to the monetization approach (see the “Think early about monetization” section). If the bank wants to retain the option of spinning off an ecosystem platform in the future, or listing it separately, its IT should not be enmeshed with the bank’s legacy systems. Nor can it be completely divorced: efficient transfer of information between the two systems is needed to maximize value for both banking and nonbanking offerings. IT is a key driver of costs and of the ecosystem design and business model. For instance, a Western European bank decided to integrate its ecosystem solution with its mobile banking platform.


The New Addition to the Dell EMC Ready Solutions for AI Portfolio


The Deep Learning with Intel solution joins the growing portfolio of Dell EMC Ready Solutions for AI and was unveiled today at the International Supercomputing Conference in Frankfurt. This integrated hardware and software solution is powered by Dell EMC PowerEdge servers, Dell EMC PowerSwitch networking, and scale-out Isilon NAS storage; leverages the newest AI capabilities of the 2nd Generation Intel® Xeon® Scalable processor microarchitecture and the Nauta open source software; and includes enterprise support. The solution empowers organizations to deliver on the combined needs of their data science and IT teams and leverages deep learning to fuel their competitiveness. Dell Technologies Consulting Services help customers implement and operationalize Ready Solution technologies and AI libraries, and scale their data engineering and data science capabilities. Once deployed, ProSupport experts provide comprehensive hardware and collaborative software support to help ensure optimal system performance and minimize downtime.


5G in the UK — overhyped or has the next era of connectivity really begun?

The availability of 5G is dependent on local operators (EE, O2, Vodafone, etc.) — businesses are relying on them to build it out and drive the capabilities forward. These businesses will need connectivity across multiple networks, and so while an operator race is developing, it’s important that every network is competitive. Unfortunately, this is not a priority for heated competitors. Over the last two months, operators have revealed their capabilities, but they’re very focused on their own networks. It’s unlikely they will have even started to talk about how to make that available to other partners, or asked how to support sharing across partners within the network. As with other technology changes, that work typically comes in a second phase. “Typically, the first phase is to build out and scale up within their own networks; and then the next phase is asking how do you do interoperability and interworking between the networks,” said Sherwood. “That’s even further away from scale. ... ”


Could AI Enable The Idea Of 'Reverse Fact Checking'?

Fact checking today is a reactive process in which journalists wait for a falsehood to begin spreading virally and then publish their final verdict long after the falsehood’s spread has tapered off and the damage is done. Much of this delay stems from the amount of time and research it takes for fact checkers to investigate a claim and determine its veracity. What if we inverted this process and required every social media post to provide external attribution for its claims and used deep learning algorithms to compare the statements in the post to the original material it cites as its source? Could this “reverse fact checking” largely curb the spread of digital falsehoods? The greatest limitation of today’s fact checking landscape is the time and effort it takes fact checkers to investigate a claim. Collecting evidence, reaching out to organizations and experts for commentary and summarizing the resulting information into a final verdict is an extremely time-consuming process that offers few opportunities for efficient scaling.


Identity and access management –– mitigating password-related cyber security risks

The death of the password has been heralded since Hewlett-Packard introduced biometric fingerprint scanning into laptops in the mid-1990s. But it is still pervasive. Biometrics have become more common in personal and mobile devices, it’s true. But there is still a range of applications out there that are hugely dependent on passwords as their primary method of authentication. In any enterprise or small business, there’s still a heavy reliance on passwords, and often businesses don’t even know the extent to which applications are being used in the business. IT might know about the common apps used in that organisation, but they may have no visibility of the applications that some departments have adopted autonomously. To give an example, My1Login worked with one smaller organisation that thought it had about 40 applications in use across the business. When they switched My1Login’s solution on, the technology discovered there were actually 600 corporate applications being used. All of these are now integrated fully with single sign-on.


Five Android and iOS UI Design Guidelines for React Native


In a multiplatform approach, the designer is bound by the guidelines for each platform. This approach is more useful if your application has a complex UI and your main goal is to attract users who are more likely to spend their time on their favorite platform, be it iOS or Android. Going by the above example of a search bar, an Android user is more likely to be comfortable with the look and feel of the standard search bar of an Android app. This contrasts with an iPhone user, who will not be comfortable with the standard Android search bar. So, in a multiplatform approach, you strive to give each user the kind of look and feel they are used to. Let’s have a look at a more realistic example in order to have a clearer picture of what the multiplatform approach entails: Airbnb. As you can see in the image below, the versions of the Airbnb app for iOS and Android look entirely different, because they follow design guidelines that are totally platform-specific.



Quote for the day:



"Blessed are the people whose leaders can look destiny in the eye without flinching but also without attempting to play God." -- Henry Kissinger


Daily Tech Digest - June 16, 2019

While We Wait For Artificial Superintelligence, Let's Make The Most Of Augmented Intelligence

Augmented intelligence has displayed unmatched potential in multiple industry sectors such as healthcare, retail, finance, manufacturing and many more. Just about every organization is already deploying or planning to use augmented intelligence for various applications. The ability of augmented intelligence to improve human capabilities has proved to be fruitful in the workplace. With the help of augmented intelligence, employee performance, productivity, and experience can grow at a staggering rate. Organizations must exploit augmented intelligence to its maximum potential to gain the best possible results and maintain a competitive edge. Augmented intelligence can effectively improve workplace productivity by automating various tasks. Routine and admin tasks require a workforce and consume a significant chunk of employee time. Such tasks can be easily automated with the help of augmented intelligence. Augmented intelligence has given rise to advanced solutions such as Robotic Process Automation (RPA) for various industry sectors.


Data Governance: From Risk Management to Business Value

When data governance was just oriented around compliance, the scope of data and the governance requirements were controlled and prescriptive. This narrow focus made it possible to use manual processes for governance and stewardship activities. In the new world of business value-based data governance, the sheer scale of data and the collaboration required across all organizational functions make automation critical to success. We now have data lakes with petabytes of data, being updated in real time with streaming sensor data, social data, and mobile location data. There are tens of thousands of users accessing the data across finance, sales, marketing, service, procurement, research and development, manufacturing, logistics, and distribution. It’s at least a thousand-fold increase in scale and complexity. At this scale the only way you will keep up is with AI-powered automation.


The Danger of Bias in an AI Tech-Based Society


The intelligence of AI systems is learned from humans. By nature, humans are biased. We will usually want our national team to win against a rival. We will always be rooting for our own family members to succeed. Even though we may not realize it, deep in our subconscious lies bias. Algorithmic bias occurs when the AI system acts in a way that reflects the implicit values of the humans who were involved in the data collection, selection, and programming. Despite the presumed neutrality of the data, algorithms are open to bias. Algorithmic bias often goes undetected: hidden in the depths of the mathematical programming of AI tech, it means that important decisions go unchecked. This could have serious negative consequences for poorer communities and minority groups. ... If algorithms could accurately predict which defendants are likely to re-offend, the system could be made more selective about sentencing and more just. However, this would mean that the algorithms would have to be devoid of any type of bias to avoid exacerbating unwarranted and unjust disparities that are already far too common in the criminal justice system.



Addressing the Top Three Risk Trends for Banks in 2019

Most of the cybersecurity risk for banks comes from application security. The more banks rely on technology, the greater the chance they face of a security breach. Adding to this, hackers continue to refine their techniques and skills, so banks need to continually update and improve their cybersecurity skills. This expectation falls to the bank board, but the way boards oversee cybersecurity continues to vary: Twenty-seven percent opt for a risk committee; 25 percent, a technology committee; and 19 percent, the audit committee. Only 8 percent of respondents reported their board has a board-level cybersecurity committee; 20 percent address cybersecurity as a full board rather than delegating it to a committee. Utilizing technological tools to meet compliance standards—known as regtech—was another prevalent theme in this year’s survey. This is a big stress area for banks due to continually changing requirements. The previous report indicated that survey respondents saw increased expenses around regtech.


Capturing value in machinery and industrial automation as market dynamics change

Currently, most established players—OEMs, automation-device suppliers, and machine-control suppliers—are working on strategies to cope with shifting growth patterns and the resulting mix of unexpected high demand and declining growth in more mature technologies. At the same time, these players are preparing themselves to be best positioned to claim a share of the additional value expected to be created by digital manufacturing solutions, which we estimate will double to €32 billion worldwide by 2025. The disruptive trend of digitization also attracts new players to participate in the market, especially in the space of software, platforms, and application providers. This diversification challenges the foothold that established players have enjoyed on strategic control points, for example, the machine-control layer in the automation technology stack. While the strategic cornerstones are often obvious and similar across players—for instance, securing core business, capturing additional value from digitization, and increasing internal efficiency—the exact chances of success of individual strategic measures and the threat from competition remain uncertain.



Microsoft’s Ann Johnson: ‘Identity is the new perimeter’

Identity is the new perimeter and we identify identity as the human, the device, the data, the application – and all of these have a unique identity and all of these need to be updated, hashed and healthy. In the context of ML, we take all of those variables and put them in the ML engine and assign risk based on where the user is, what they are trying to access, how they authenticate and what device they are on. What we find with bad actors is that we are not seeing yet, in any meaningful way, production of malware that adapts in the wild that you would expect, but potentially in the future. We are not seeing yet any meaningful corruption with AI models or putting malicious data into ML engines to try to train it incorrectly. I do expect that there will be attack vectors and we are doing a tremendous amount of work with Microsoft Research to make sure we build those defences. But the good news is that we are not seeing it in any meaningful or wholesale way today, and that’s why I don’t think it is quite a race.


Managing the multicloud will require lots of AI – but people too

The more complex your multicloud becomes, the less likely it is that you’ll be able to entirely automate responses to the vast range of underlying platform, application, service and other issues. Human-in-the-loop exception handling will become the order of the day for the long tail of rare cloud-computing use cases up and down this multilayered management plane. The more complex cloud management functions — including cost management, security and compliance, application development, deployment and operational management — will continue to rely on collaborative responses that skilled human IT personnel may need to improvise on the fly. The orchestration layer in the more complex cloud deployment use cases will need to drive human-response flows alongside entirely system-automated responses. The less common a specific incident or situation is, the less likely it is that there will be sufficient historical “ground truth” data for training the highly predictive statistical models upon which AI-driven automations depend. In many multicloud operational circumstances, AI-driven workflows will often span several tiers of IT support resources working in lockstep over indefinite periods.


Microsoft Edge Reddit AMA: Edge might come to Linux

The biggest tease the company dropped was its apparent willingness to release an Edge version for Linux -- a move that was once considered inconceivable. "We don't have any technical blockers to keep us from creating Linux binaries, and it's definitely something we'd like to do down the road," the Edge team said. "That being said, there is still work to make them 'customer ready' (installer, updaters, user sync, bug fixes, etc.) and something we are proud to give to you, so we aren't quite ready to commit to the work just yet." "Right now, we are super focused on bringing stable versions of Edge first to other versions of Windows, and then releasing our Beta channels," Edge devs said. While the Chromium codebase on which the upcoming Edge version is built supports Linux builds, users were afraid that when Microsoft ripped out various Chromium features last year, it might have impacted Edge's ability to support cross-platform builds. However, today's comment confirms a tweet published in April on the personal Twitter account of one of Edge's developers.


For better healthcare claims management, think “digital first”

In the long-term vision, digital solutions would cover all steps within claims management. Because the process would be fully digital, very little human intervention would be needed. In this scenario, claims would be transferred in real time from a provider to a cloud solution containing all electronic health documents. Once a claim is transferred to the cloud, self-learning algorithms would automatically access it and perform real-time auditing using technical reference points, such as the claimant’s insurance status and benefits package, as well as medical reference points. Once robust self-learning algorithms have been established and trained using both existing data and expert knowledge, their efficiency will continue to improve over time. Ultimately, it would become possible to automate payers’ communications with providers and customers. For example, if further information was required to reach a decision about a specific claim, providers would be contacted automatically via a digital request form that would include an integrated first check for basic information.



Foundations Of Business Architecture


The work of creating and defining a business architecture is not meant as an academic exercise. A business architecture is based on the organization’s business strategy. The business architecture positions the organization to operate efficiently in pursuit of its goals. As defined, a business venture is about creating value. Value is demonstrated in the form of corporate profits or in returns to owners and shareholders. Corporate goals tend to be high-level and wide. Organizations use various processes and methods for capturing and documenting the corporate goals. The method used in capturing the corporate goals is less important than having the discipline, structure, and communication methods to support the creation and dissemination of the corporate goals across the entire organization. Used most effectively, corporate goals are developed within the context of a larger enterprise-wide strategic planning function. Often, the process is used in creating the organization’s data strategy, which may occur during enterprise architecture planning.



Quote for the day:


"The most valuable thing you can make is a mistake - you can't learn anything from being perfect." -- Adam Osborne


Daily Tech Digest - June 15, 2019

This is likely the No. 1 thing affecting your job performance


To improve your expertise, you must first identify gaps in your knowledge. You aren’t likely to be motivated to learn new things–nor can you be strategic about learning–if you’re not aware of what you do and don’t know. Without a good map of the existing state of your knowledge, you’ll bump into crucial new knowledge only by chance. ... The ability to know what you know and what you don’t know is called metacognition—that is, the process of thinking about your thinking. Your cognitive brain has a sophisticated ability to assess what you do and don’t know. You use several sources of information to make this judgment. Research by Roddy Roediger and Kathleen McDermott identified two significant sources of your judgments about whether you know something: memory and familiarity. If I ask you whether you’ve heard of Stephen Hawking, you start by trying to pull information about him from your memory. If you recall explicitly that he was a famous physicist or that he worked on black holes and had ALS, then you can judge that you’ve heard of him.



Fintech CEOs bullish on blockchain tech, give thumbs down on applications


While cryptocurrency received something of a reprieve, financial services executives this week expressed doubts about the current applications for blockchain and other distributed ledger technology. “There’s too much hype around blockchain,” said Rishi Khosla, CEO and co-founder of U.K.-based challenger bank OakNorth. “For the practicality of what’s actually been delivered so far, it is way underrated. I do believe that blockchain has a place in lending, especially when you think about sort of the whole ‘perfecting security process’. It just requires so much changing of the plumbing.” Still, some nodded favorably toward the technology’s potential impact on the industry. Securities and Exchange Commission commissioner Robert Jackson said blockchain technology can both shorten the time and lower the expense of clearing and settling trades. He also pointed to potential use cases for auditing, smart contracts and tracking and dealing with fraud.


Blockchain: A Boon for Cyber Security


Blockchain technology has impacted the cyber security industry in a few ways. The HYPR Corp is a New York-based company that provides enterprises with decentralised authentication solutions, which enable consumers and employees to securely and seamlessly access mobile, Web and Internet of Things (IoT) applications. It uses blockchain technology to decentralise credentials and biometric data to facilitate risk-based authentication. It invested US$ 10 million in 2018 on this platform. NuCypher is another blockchain security company, one which applies proxy re-encryption to distributed blockchain systems. It also has an access control platform and uses public-key encryption to securely transfer data and enforce access requirements. Blockchain is one of the biggest tech buzzwords of the last few years, and the technology is being marketed as a cure for everything, including cyber security. Japan's Ministry of Internal Affairs and Communications implemented a blockchain-based system for processing government tenders in March 2018.



Sensory Overload: Filtering Out Cybersecurity's Noise

A good security process is extremely valuable. Regardless of the task at hand, process brings order to the chaos and minimizes the redundancy, inefficiency, and human error resulting from lack of process. On the other hand, a bad security process can have exactly the opposite effect. Processes should help and improve the security function. In order to do so, they need to be precise, accurate, and efficient. If they aren't, they should be improved by filtering out the noise and boiling them down to their essence. It's far too easy to get distracted by every new security fad that comes our way. Once in a while, an item du jour becomes something that needs to be on our radar. But most of the time, fads come and go and seldom improve our security posture. Worse, they can pull us away from the important activities that do. Many of us don't know exactly what logs and event data we will or will not need when crunch time comes. As a result, we collect everything we can get our hands on. We fill up our available storage, shortening retention and impeding performance, although we may never need 80% of what we're collecting.


The Next Big Privacy Hurdle? Teaching AI To Forget


The lack of debate on what data collection and analysis will mean for kids coming of age in an AI-driven world leaves us to imagine its implications for the future. Mistakes, accidents, teachable moments—this is how children learn in the physical world. But in the digital world, when every click, view, interaction, engagement, and purchase is recorded, collected, shared, and analyzed through the AI behemoth, can algorithms recognize a mistake and understand remorse? Or will bad behavior be compounded by algorithms that are nudging our every action and decision for their own purposes? What makes this even more serious is that the massive amount of data we’re feeding these algorithms has enabled them to make decisions experientially or intuitively like humans. This is a huge break from the past, in which computers would simply execute human-written instructions. Now, advanced AI systems can analyze the data they’ve internalized in order to arrive at a solution that humans may not even be able to understand. Many AI systems have thus become “black boxes,” even to the developers who built them, and it may be impossible to reason about how an algorithm arrived at a certain decision.


How To Choose The Right Approach To Change Management

Our analysis shows that when you aggregate all the stages in the most popular OCM change models into a 10-stage process, none of them really cover all the bases. In fact, the analysis shows that if you choose one of these models you are likely to miss around 40 per cent of the steps suggested by other models. The analysis also shows that the biggest gap in popular change models is in ‘Assessing the Opportunity or Problem Motivating the Change’ – arguably the most critical step in OCM. ... So where do we turn when there is no real evidence to support popular change management models? Lewin did build an evidence base on a different approach to OCM. Rather than a planned approach to change, Lewin argues for a more emergent approach. He suggests that groups or organisations are in a continual process of adaptation – there is no freezing or unfreezing. So, what are the critical success factors for creating an organisational culture that can purposefully adapt to changing environments whilst maintaining current operations?



In the drive to improve customer experience, Marketing needs to develop this single customer view, which will allow extremely targeted marketing. It does not help if copious social and historic shopping data is collated and used to build a customer persona when the customer's mobile number or email address was captured incorrectly. Likewise, duplicate records and "decayed" (out-of-date) data create annoyances both for the customer and for the marketing department. Much research has gone into why data is inaccurate, and the same answer is always found: human error. While human error can create the initial quality issue — for instance, when customer information is being loaded by one of the company's employees — benign neglect is also a contributor. Periodic reviews of whether customer contact details have changed are required, as well as scrupulous attention to returned emails and failed SMS messages experienced during a marketing campaign. It is interesting to note that "Inadequate senior management support" is given as a challenge by 21% of the respondents.
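The hygiene checks described above can be partially automated. A minimal sketch, assuming an illustrative record layout and validation rules (neither comes from the article or any specific product):

```python
# Flag invalid contact fields and duplicate customer records.
# Record layout, regex, and dedup key are illustrative assumptions.
import re

customers = [
    {"id": 1, "email": "ann@example.com", "mobile": "+27821234567"},
    {"id": 2, "email": "bob@example",     "mobile": "0821234567"},   # malformed email
    {"id": 3, "email": "ann@example.com", "mobile": "+27821234567"}, # duplicate of 1
]

# Deliberately loose pattern: just "something@something.tld".
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def audit(records):
    """Return a list of (record id, problem description) pairs."""
    problems, seen = [], {}
    for rec in records:
        if not EMAIL_RE.match(rec["email"]):
            problems.append((rec["id"], "invalid email"))
        key = rec["email"].lower()  # naive dedup key: normalized email
        if key in seen:
            problems.append((rec["id"], f"duplicate of record {seen[key]}"))
        else:
            seen[key] = rec["id"]
    return problems

print(audit(customers))  # → [(2, 'invalid email'), (3, 'duplicate of record 1')]
```

In practice such an audit would run periodically, alongside feedback loops from bounced emails and failed SMS deliveries, rather than as a one-off script.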


How Do We Think About Transactions in (Cloud) Messaging Systems?

The baseline we need to start from is that everything is interconnected with everything else, and users are going to expect to connect with their data and to collaborate with other users on any set of data in real time, across the globe. Messaging systems were introduced as a way of providing some element of reliable message passing over longer distances. Consider the scenario where you're transferring money from one account to another. There isn't the possibility, nor is there the desire, for any bank to lock records inside the databases of any other bank around the planet. So messaging was introduced as a temporary place that's not in your database or in my database, and then we can move the money around through these highly reliable pipes. Each step of the journey can be a transaction: from my database to an outgoing queue, from my outgoing queue to an intermediary queue, from one intermediary queue to another intermediary queue, from there to your incoming queue, and from your incoming queue to your database. As long as each one of those steps was reliable and transactional, the whole process could be guaranteed to be safe from a business perspective.
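The per-hop transaction idea can be sketched concretely. Below, two SQLite tables stand in for an outgoing and an incoming queue (the table names, payload format, and single-connection setup are illustrative assumptions, not any real messaging product's API); each hop deletes from the source and inserts into the destination inside one transaction, so the message either moves completely or not at all:

```python
# One transactional "hop": a message leaves the source queue and
# arrives in the destination queue atomically.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE outgoing (id INTEGER PRIMARY KEY, payload TEXT);
    CREATE TABLE incoming (id INTEGER PRIMARY KEY, payload TEXT);
    INSERT INTO outgoing (payload) VALUES ('transfer:100:acct-a:acct-b');
""")

def relay_one_hop(conn, src, dst):
    """Move one message from src to dst; commit both statements or neither."""
    with conn:  # opens a transaction; commits on success, rolls back on error
        row = conn.execute(f"SELECT id, payload FROM {src} LIMIT 1").fetchone()
        if row is None:
            return None
        conn.execute(f"DELETE FROM {src} WHERE id = ?", (row[0],))
        conn.execute(f"INSERT INTO {dst} (payload) VALUES (?)", (row[1],))
        return row[1]

moved = relay_one_hop(conn, "outgoing", "incoming")
print(moved)  # → transfer:100:acct-a:acct-b
```

Chaining several such hops — database to outgoing queue, through intermediaries, into the receiver's database — gives the end-to-end guarantee the passage describes, without any cross-database lock.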


Identity Is Not The New Cybersecurity Perimeter -- It's The Very Core

The phrase "identity is the new perimeter" suggests that security perimeters are still effective in a cloud-native world -- and they most certainly are not. I often like to say, "If identity is the new perimeter, then Bob in accounting is the new Port 80." In this new cloud-first world, all a hacker needs to do is get one person in an organization to click a link and it's game over. With the compromised employee’s credentials in hand, they can walk right through your defenses undetected and rob you blind. For true security in the cloud, identity needs to move to the very core of a company’s cybersecurity apparatus. That’s because when there is no more perimeter, only identity can serve as the primary control for security. As advocates of zero trust security (myself included) advise, “Don’t trust, verify.” How do you do it? Making the transition to a security model that places identity at the center involves a cultural shift that spans a company’s people, processes and technology. Here are key insights on how to get started, based on 15 years of experience helping companies turn the corner on identity-based security.
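The "don't trust, verify" principle means every request carries an identity token that is checked on each call, rather than trusting anything about where the request came from. A minimal sketch, using an HMAC over a `user:expiry` string (the token scheme, secret handling, and function names are illustrative assumptions, not any vendor's API):

```python
# Issue and verify short-lived identity tokens on every request.
import hashlib
import hmac
import time

SECRET = b"demo-secret"  # in practice, loaded from a secrets manager

def issue_token(user, ttl=3600):
    """Return a token of the form user:expiry:signature."""
    expiry = str(int(time.time()) + ttl)
    sig = hmac.new(SECRET, f"{user}:{expiry}".encode(), hashlib.sha256).hexdigest()
    return f"{user}:{expiry}:{sig}"

def verify_token(token):
    """Return the user if the token is authentic and unexpired, else None."""
    try:
        user, expiry, sig = token.rsplit(":", 2)
    except ValueError:
        return None
    expected = hmac.new(SECRET, f"{user}:{expiry}".encode(), hashlib.sha256).hexdigest()
    if hmac.compare_digest(sig, expected) and int(expiry) > time.time():
        return user  # identity established for this request only
    return None

token = issue_token("bob.accounting")
print(verify_token(token))        # → bob.accounting
print(verify_token(token + "x"))  # tampered signature → None
```

The point is architectural rather than cryptographic: verification happens per request at the core, so a stolen session inside the "perimeter" buys an attacker nothing once the token expires or is revoked.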



Developing and Managing Change Strategies with Enterprise Architecture

The reality for most enterprises with IT portfolios of more than 100 applications is that a combination of replacement options is technically feasible and, given the right approach, perhaps even cost-effective. And by using LeanIX, Enterprise Architects and their stakeholders can leverage collaborative mechanisms and live data to quickly evaluate technologies, see which mixture of SQL Server 2008/2008 R2 alternatives matches specific business strategies, and then govern the transformation projects thereafter. By linking Business Capabilities to applications, and linking those applications to technology components like SQL Server, Enterprise Architects can review Business Capability maps as seen within LeanIX Reports like the Application Matrix to align improvements with essential organizational processes. In particular, alongside a series of configurable views like “Technology Risk” and “Lifecycle”, an Application Matrix Report shows Business Capabilities and their supporting technologies across geographical user groups to help Enterprise Architects base decisions on overlapping business needs.



Quote for the day:


"A leadership disposition guides you to take the path of most resistance and turn it into the path of least resistance." -- Dov Seidman