Friday, 19 May 2017

C2090-102 IBM Big Data Architect

Test information:
Number of questions: 55
Time allowed in minutes: 90
Required passing score: 60%
Languages: English

This test consists of 5 sections containing a total of 55 multiple-choice questions. The percentages after each section title reflect the approximate distribution of the total question set across the sections.

Requirements (16%)
Define the input data structure
Define the outputs
Define the security requirements
Define the requirements for replacing and/or merging with existing business solutions
Define the solution to meet the customer's SLA
Define the network requirements based on the customer's requirements

Use Cases (46%)
Determine when a cloud-based solution is more appropriate vs. an in-house solution (and migration plans from one to the other)
Demonstrate why Cloudant would be an applicable technology for a particular use case
Demonstrate why SQL or NoSQL would be an applicable technology for a particular use case
Demonstrate why Open Data Platform would be an applicable technology for a particular use case
Demonstrate why BigInsights would be an applicable technology for a particular use case
Demonstrate why BigSQL would be an applicable technology for a particular use case
Demonstrate why Hadoop would be an applicable technology for a particular use case
Demonstrate why BigR and SPSS would be applicable technologies for a particular use case
Demonstrate why BigSheets would be an applicable technology for a particular use case
Demonstrate why Streams would be an applicable technology for a particular use case
Demonstrate why Netezza would be an applicable technology for a particular use case
Demonstrate why DB2 BLU would be an applicable technology for a particular use case
Demonstrate why GPFS/HDFS would be an applicable technology for a particular use case
Demonstrate why Spark would be an applicable technology for a particular use case
Demonstrate why YARN would be an applicable technology for a particular use case

Applying Technologies (16%)
Define the necessary technology to ensure horizontal and vertical scalability
Determine data storage requirements based on data volumes
Design a data model and data flow model that will meet the business requirements
Define the appropriate Big Data technology for a given customer requirement (e.g. Hive/HBase or Cloudant)
Define appropriate storage format and compression for given customer requirement
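The last two objectives above often come down to simple sizing arithmetic. As a rough, hypothetical illustration (not an exam requirement), the following Python sketch shows how much repetitive, record-oriented data shrinks under a general-purpose codec such as gzip; columnar formats with codecs like Snappy or zlib trade compression ratio against speed in a similar way.

```python
import gzip

# Hypothetical workload: 10,000 identical delimited records, standing in
# for the highly repetitive data that columnar formats compress well.
rows = "\n".join("2017-05-19,web,click,200" for _ in range(10_000)).encode("utf-8")

compressed = gzip.compress(rows)

ratio = len(rows) / len(compressed)
print(f"raw: {len(rows)} bytes, gzip: {len(compressed)} bytes, "
      f"ratio: {ratio:.0f}x")
```

Real workloads compress far less than identical rows do, so storage estimates should always be validated against a sample of the customer's actual data.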

Recoverability (11%)
Define the potential need for high availability
Define the potential disaster recovery requirements
Define the technical requirements for data retention
Define the technical requirements for data replication
Define the technical requirements for preventing data loss

Infrastructure (11%)
Define the hardware and software infrastructure requirements
Design the integration of the required hardware and software components
Design the connectors / interfaces / APIs between the Big Data solution and the existing systems

Job Role Description / Target Audience
The Big Data Architect works closely with the customer and the solutions architect to translate the customer's business requirements into a Big Data solution. The Big Data Architect has deep knowledge of the relevant technologies, understands the relationships between those technologies, and knows how they can be integrated and combined to effectively solve any given big data business problem. This individual can design large-scale data processing systems for the enterprise and provide input on architectural decisions, including hardware and software. The Big Data Architect also understands the complexity of data and can design systems and models to handle data variety (structured, semi-structured, unstructured), volume, velocity (including stream processing), and veracity. The Big Data Architect is also able to effectively address the information governance and security challenges associated with the system.

Recommended Prerequisite Skills
Understand the data layer and particular areas of potential challenge/risk in the data layer
Ability to translate functional requirements into technical specifications
Ability to take an overall solution/logical architecture and provide a physical architecture
Understand Cluster Management
Understand Network Requirements
Understand Important interfaces
Understand Data Modeling
Ability to identify/support non-functional requirements for the solution
Understand Latency
Understand Scalability
Understand High Availability
Understand Data Replication and Synchronization
Understand Disaster Recovery
Understand Overall performance (Query Performance, Workload Management, Database Tuning)
Propose recommended and/or best practices regarding the movement, manipulation, and storage of data in a big data solution, including but not limited to:
Understand Data ingestion technical options
Understand Data storage options and ramifications (for example, understand the additional requirements and challenges introduced by data in the cloud)
Understand Data querying techniques & availability to support analytics
Understand Data lineage and data governance
Understand Data variety (social, machine data) and data volume
Understand/Implement and provide guidance around data security to support implementation, including but not limited to:
Understand LDAP Security
Understand User Roles/Security
Understand Data Monitoring
Understand Personally Identifiable Information (PII) Data Security considerations

Software areas of central focus:
BigInsights
BigSQL
Hadoop
Cloudant (NoSQL)

Software areas of peripheral focus:
Information Server
Integration with BigInsights, Balanced Optimization for Hadoop, JAQL push-down capability, etc.
Data Governance
Security features of BigInsights
Information Server (MetaData Workbench for Lineage)
Optim Integration with BigInsights (archival)
DataClick for BigInsights (Future: DataClick for Cloudant, to pull operational data into Hadoop for analytics; scripts available today)
BigMatch (entity matching to achieve a single view)
Guardium (monitoring)
Analytic Tools (SPSS)
BigSheets
Support in Hadoop/BigInsights
Data Availability and Querying Support
Streams
Interface/Integration with BigInsights
Streaming Data Concepts
In memory analytics
Netezza
DB2 BLU
Graph Databases
Machine Learning (System ML)


QUESTION 1
What are the two levels documented in the Operational Model? (Choose two.)

A. Logical
B. Rational
C. Theoretical
D. Physical
E. Middleware

Answer: A,D


QUESTION 2
The inputs to the Architectural Overview document do NOT include which of the following?

A. Architectural Goals
B. Key Concepts
C. Architectural Overview Diagram
D. Component Model

Answer: D


QUESTION 3
The downside of cloud computing, relative to SLAs, is the difficulty in determining which of the following?

A. Root cause for service interruptions
B. Turn-Around-Time (TAT)
C. Mean Time To Recover (MTTR)
D. First Call Resolution (FCR)

Answer: A

Explanation:
References: https://en.wikipedia.org/wiki/Service-level_agreement


QUESTION 4
“The programming model for client developers will hide the complexity of interfacing to legacy systems” is an example of which of the following?

A. A use case
B. An architectural decision
C. A client imperative
D. An empathy statement

Answer: B


QUESTION 5
Which of the following is the artifact that assists in ensuring that the project is on the right path toward success?

A. Component Model
B. Empathy Map
C. Viability Assessment
D. Opportunity Plan

Answer: C

Wednesday, 10 May 2017

C2090-011 IBM SPSS Statistics Level 1 v2

Test information:
Number of questions: 55
Time allowed in minutes: 90
Required passing score: 67%
Languages: English

Operations/Running IBM SPSS Statistics (15%)
General use
Operations
Settings
Syntax
Variables

Reading and Defining Data (16%)
Datasets
Reading data
Variables

Data Understanding/Descriptives (9%)
Crosstabs
Descriptive statistics
Dispersion
Procedure
Frequencies
Means procedure
Statistics

Data Management (15%)
Adding cases
Aggregation
Duplicate cases
Select Cases
Split File

Data Transformations (16%)
Categorical variables
Computing variables
Counting values across variables
Counting values across cases
If conditions
Recoding
Variables

Output: Editing and Exploring (7%)
Charts
Exporting results
OMS
Output
Pivot Tables Editor
Scale data: boxplots
Scale variable charts
Scatterplots
TableLooks

Basic Inferential Statistics (22%)
Chi-square
Correlations
Regression
Samples
Standard error
T-Test
Statistics

IBM Certified Specialist - SPSS Statistics Level 1 v2

Job Role Description / Target Audience
This certification is for individuals with a working knowledge of IBM SPSS Statistics version 15 or higher, including: analysts, statisticians, and individuals in academia, business, or research who use the IBM SPSS product. The IBM Certified Specialist - SPSS Statistics Level 1 v2 may utilize the IBM SPSS Statistics product for predictive analysis, market research and statistical research.

Requirements
This certification requires one test.

QUESTION 1
Suppose you want to copy and paste variable definitions from 481 variables in one data file to 481 corresponding variables in a second data file.
The only way to do this is to select and copy each variable’s definitions in the first file and paste to the corresponding variable in the second file.

A. True
B. False

Answer: B


QUESTION 2
In order to import data from database sources such as Access and Oracle into IBM SPSS Statistics, you must first export the data from the database to a text file and then import this text file into IBM SPSS Statistics?

A. True
B. False

Answer: B


QUESTION 3
Which variable name is correctly formed for use in an IBM SPSS Statistics data file?

A. 3job_categories
B. job_3_categories
C. Respondents’ age
D. Employee safety

Answer: B
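For reference, SPSS variable names must begin with a letter, may not contain spaces or most punctuation, and may not end with a period. The sketch below is a simplified, illustrative check of these rules in Python; it does not capture the full specification (e.g. the @/#/$ prefix characters or locale-specific letters), and `is_valid_spss_name` is a hypothetical helper, not part of any IBM product.

```python
import re

# Reserved keywords that cannot be used as variable names in SPSS syntax.
RESERVED = {"ALL", "AND", "BY", "EQ", "GE", "GT", "LE", "LT", "NE", "NOT", "OR", "TO", "WITH"}

def is_valid_spss_name(name: str) -> bool:
    """Simplified check: starts with a letter; letters, digits,
    underscores, and periods only; no trailing period; max 64 bytes;
    not a reserved keyword."""
    if len(name.encode("utf-8")) > 64 or name.upper() in RESERVED:
        return False
    return bool(re.fullmatch(r"[A-Za-z][A-Za-z0-9_.]*", name)) and not name.endswith(".")

# The four options from the question above:
for candidate in ["3job_categories", "job_3_categories", "Respondents’ age", "Employee safety"]:
    print(candidate, "->", is_valid_spss_name(candidate))
```

Only `job_3_categories` passes: the others start with a digit or contain spaces/punctuation.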


QUESTION 4
Which transformation feature would you use to convert a string variable with values such as Female and Male to a numeric variable with values 1 and 2?

A. Define Variable Properties
B. Automatic Recode
C. Visual Binning
D. Shift Values

Answer: B
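Automatic Recode assigns each distinct string value a consecutive integer, by default in ascending order of the values. A minimal Python sketch of the idea (illustrative only; `automatic_recode` is a hypothetical stand-in, not the SPSS implementation):

```python
def automatic_recode(values):
    # Map each distinct value to a consecutive integer code,
    # in ascending (alphabetical) order of the values.
    mapping = {v: i for i, v in enumerate(sorted(set(values)), start=1)}
    return [mapping[v] for v in values], mapping

codes, mapping = automatic_recode(["Female", "Male", "Male", "Female"])
print(mapping)   # {'Female': 1, 'Male': 2}
print(codes)     # [1, 2, 2, 1]
```

In SPSS, Automatic Recode also copies the original strings into value labels for the new numeric variable, so no information is lost.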


QUESTION 5
Which options are available to edit bar charts? (Choose three.)

A. Changing patterns displayed in bars
B. Changing the major increment on the Y axis scale
C. Displaying data value labels
D. Changing the variable displayed on the X axis

Answer: A,B,C

Tuesday, 2 May 2017

C2070-988 IBM Case Manager V5.2 Specialist

Test information:
Number of questions: 66
Time allowed in minutes: 120
Required passing score: 63%
Languages: English

The test contains six sections, totaling 66 multiple-choice questions. The percentages after each section title reflect the approximate distribution of the total question set across the sections.

Section 1 - Planning, Architecture and Installation (28%)
Demonstrate an understanding of IBM Case Manager architecture
Identify scenarios best addressed by IBM Case Manager solutions
Identify minimum installation requirements
Identify key differences between development and production environments
Identify integrated product capabilities included in the base license
Demonstrate installation knowledge (single server and distributed environment)
Demonstrate knowledge of upgrading IBM Case Manager software

Section 2 - Configuration and Administration (includes Performance Tuning) (26%)
Demonstrate knowledge of the IBM Case Manager Configuration tool
Demonstrate knowledge of the IBM Case Manager Administration Client
Demonstrate knowledge of starting and stopping the system
Demonstrate knowledge of setting up project areas and target environments
Demonstrate knowledge of performance tuning
Demonstrate an understanding of the security model and SSO

Section 3 - Updating, Upgrading, Deploying and Testing a Case Solution (24%)
Demonstrate an understanding of the IBM Case Manager object model
Demonstrate an understanding of Case Builder and its Artifacts
Demonstrate an understanding of Case Client and IBM Content Navigator integration
Demonstrate an understanding of widgets, pages and views
Demonstrate an understanding of deploying a Case Solution
Demonstrate knowledge of side-by-side migration

Section 4 - Extending a Case Management Solution (16%)
Demonstrate and identify IBM capabilities to extend IBM Case Manager
Demonstrate knowledge of integrating IBM Forms into a solution
Demonstrate knowledge of integrating custom widgets
Demonstrate knowledge of analytics reporting using IBM Cognos Business Intelligence and Real Time Monitoring
Demonstrate knowledge of integrating with IBM Operational Decision Manager
Demonstrate knowledge of integrating with IBM Business Process Manager

Section 5 - Solution Migration (20%)
Demonstrate knowledge and understanding of the solution migration process and requirements
Demonstrate knowledge and understanding of solution migration tools
Demonstrate knowledge of configuring and modifying the target environment after solution deployment
Demonstrate knowledge of security in the target environment
Demonstrate knowledge of deployment validation procedures

Section 6 - Troubleshooting (18%)
Demonstrate knowledge of configuring and collecting log files for IBM Case Manager
Demonstrate knowledge of troubleshooting IBM Case Manager installation
Demonstrate knowledge of troubleshooting IBM Case Manager administration
Demonstrate knowledge of troubleshooting IBM Case Manager Builder
Demonstrate knowledge of troubleshooting IBM Case Manager Client

IBM Certified Specialist - Case Manager V5.2

Job Role Description / Target Audience
This intermediate level certification test certifies that the successful candidate has important knowledge, skills, and abilities necessary to plan, install, configure, troubleshoot, administer, secure and maintain IBM Case Manager V5.2. The candidate must have the skills and knowledge to successfully update and deploy IBM Case Manager solutions and migrate solutions to a production environment.

The specialist is generally self-sufficient and is able to perform most of the tasks involved in the role with a limited amount of assistance from peers and vendor support services. The specialist efficiently uses product documentation.

Recommended Prerequisite Skills

Before preparing for this certification, a basic understanding of the following, as applied to your environment, is recommended and assumed:
working knowledge of IBM Case Manager
working knowledge of IBM Content Foundation
working knowledge of IBM Case Foundation
working knowledge of supported Application Servers
working knowledge of Content Navigator
working knowledge of databases
working knowledge of operating systems
working knowledge of LDAP
working knowledge of integrated products and software (e.g. IBM Forms, Cognos RTM and Cognos BI, IBM Content Analytics)