November 24, 2013

Add red flags to risk-based access weights in IBM Security Access Manager
Many organizations prefer a red-flag approach to risk assessment. In this approach, certain variable values in a transaction are defined as "red flags," and if any of those values appear, the transaction is considered risky. ... To implement red-flag risk assessment, give every red-flag variable a weight of one and every other variable a weight of zero. Set the risk threshold level to 1%. If any of the red-flag variables are risky, the risk level will be above 0%, and the transaction will therefore be considered dangerous and treated accordingly.
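The weighting scheme above can be sketched in a few lines. This is an illustrative model only, not IBM Security Access Manager configuration syntax; the attribute names and the `risk_score` helper are hypothetical.

```python
def risk_score(transaction, red_flags):
    """Red-flag weighting: red-flag attributes get weight 1, all others 0.
    The score is the percentage of total weight contributed by attributes
    that are currently risky (True)."""
    weights = {attr: (1 if attr in red_flags else 0) for attr in transaction}
    total = sum(weights.values())
    if total == 0:
        return 0.0
    risky = sum(w for attr, w in weights.items() if transaction[attr] and w)
    return 100.0 * risky / total

# Hypothetical transaction attributes: True means the value looks risky.
RED_FLAGS = {"unknown_device", "foreign_ip"}
txn = {"unknown_device": True, "foreign_ip": False, "weekday": True}

score = risk_score(txn, RED_FLAGS)
THRESHOLD = 1.0  # the 1% threshold from the article
is_dangerous = score > THRESHOLD  # any single risky red flag exceeds 1%
```

Because a single risky red flag yields a score well above 1%, while non-flag attributes contribute nothing, the 1% threshold turns the weighted score into an "any red flag present" test.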

Analytics 3.0: Evolution
Some of us now perceive another shift, fundamental and far-reaching enough that we can fairly call it Analytics 3.0. Briefly, it is a new resolve to apply powerful data-gathering and analysis methods not just to a company’s operations but also to its offerings—to embed data smartness into the products and services customers buy. ... the first companies to perceive the general direction of change—those with a sneak peek at Analytics 3.0—will be best positioned to drive that change

Supercomputing's big problem: What's after silicon?
Supercomputing researchers aren't sure what's next. Today, supercomputing relies on architectural changes, such as adding speedy GPUs, to boost performance. Researchers may increasingly turn to chips that integrate interconnects and memory to speed processing and reduce energy. But the teams must also wrestle with the enormous costs of building -- and running -- multi-petaflop systems. "We have reached the end of the technological era," said William Gropp, chairman of the SC13 conference and a computer science professor at the University of Illinois at Urbana-Champaign.

IBM's Strategy and Direction: Analyst View
IBM is very aware of market transformation being caused by Cloud, and continues to move toward an increasingly unified, standards-based Cloud IT and business environment. These moves will serve their established partner and customer base well, and can protect IBM from significant loss in those areas. But as Cloud-native competitors continue to establish and grow their own partner/customer bases of influence, Big Blue needs to continue to consolidate, coordinate, and accelerate a Cloud-first mentality across its divisions.

Engineers Plan a Fully Encrypted Internet
The IETF change would introduce encryption by default for all Internet traffic. And the work to make this happen in the next generation of HTTP, called HTTP 2.0, is proceeding “very frantically,” says Stephen Farrell, a computer scientist at Trinity College in Dublin who is part of the project. The hope is that a specification will be ready by the end of 2014. It would then be up to websites to actually adopt the technology, which is not mandatory.

Pattern Based Requirements Model Using SysML
When we start decomposing these problems, we realize that they consist of sub-problems of similar types: accepting input from a librarian is similar to accepting operational commands from a pilot, and displaying a book query result on a display is similar to displaying situation information on a display. Thus, using PFs, we can effectively understand and analyze the problems and then re-use our knowledge in solving them. However, problem frames are less widely adopted in industry because of the lack of standard notations and tools.

Static and dynamic testing in the software development life cycle
In the past decade, the art and practice of hacking has taken a significant turn for the worse. From the volume and complexity of attacks to the growing audience of international participants who hack for fame or fortune, hackers are modern-day pirates seeking adventure on the high seas of the Internet.
But what makes this trend even more critical is the size of the attack surface the Internet makes possible. We live in an increasingly connected world, where physical or package security is no longer the hacker's obstacle. Instead, knowledge of network protocols, applications, and an ever-growing list of exploits and utilities make up the hacker's toolkit.

The Data Scientist at Work
Data scientists need business knowledge; they need to understand the enterprise data; they need to know how to deploy technology; they have to understand statistical and visualization techniques; and, most importantly, they need to know how to interpret the results. For example, if a discovery exercise shows that the number of storks born has a strong correlation with the number of babies born one year later, data scientists should have sufficient knowledge to conclude that these variables do not have a direct relation, but that they are both dependent on a third variable, one that probably hasn’t been included in the study yet.
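The stork-and-baby pitfall can be demonstrated with synthetic data: two variables that are each driven by a hidden third variable correlate strongly even though neither causes the other. The data and the `rural` confounder below are invented purely for illustration.

```python
import random

random.seed(0)
n = 200
# Hidden confounder (e.g. how rural each district is) drives both series.
rural = [random.random() for _ in range(n)]
storks = [10 * r + random.gauss(0, 0.5) for r in rural]
babies = [8 * r + random.gauss(0, 0.5) for r in rural]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(storks, babies)  # strong correlation, yet no causal link
```

The correlation is high only because both series track `rural`; controlling for the confounder (e.g. via partial correlation) would make the apparent relationship largely disappear.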

Disaster Recovery Site Selection: Factors and Approach
For a DR strategy to work as designed, one of the most important contributors is the DR site, since it determines service availability to customers during a disaster. The following section details the factors to consider in DR site selection, along with an approach. The DR site is crucial for any business because it keeps the business running in adverse scenarios. DR site selection is a crucial decision, as it affects the availability of services to clients, RPO/RTO requirements, and service performance. Some of the factors to consider are:

Blend Strategy & Governance To Drive Business
The role of the CIO in each of these three states is different, so it is very important to have a clear picture of where your organisation is heading. CIOs then need to start assessing and evaluating internal capabilities to meet those goals. Once the gap analysis is done, CIOs need to devise strategies to fill those gaps and identify the right partners to work with. While doing so, CIOs must put in place a robust control mechanism with full ownership of the key functions associated with enterprise architecture and standards.

Quote for the day:

"If you do not know how to ask the right question, you discover nothing." -- William Edwards Deming