Assume that today's date is February 15, 2015. The Robin Hood Inc. bond is an annual-coupon bond with a par value of $1,000. How much will you pay for the bond if you purchase it today? Calculate the answer to two decimal places.
A higher-order need:
A) Always provides greater motivation than a lower-order need.
B) Never provides as much motivation as a lower-order need.
C) Becomes a source of motivation after lower-order needs are satisfied.
D) Contributes directly to the physical survival of the individual.
Which of the following statements about effective leadership is the most accurate?
A) Effective leaders have the same personality traits.
B) The most effective leadership style depends on who is being led and in what situation.
C) The democratic style of leadership will almost always improve the effectiveness of the organization.
D) One trait of effective managers is that they consistently maintain the same style of leadership.
After the Civil War, the US industrialized quickly, and, depending on who is writing about the period, either the so-called industrial titans or the robber barons were in charge of this process. What were some of the prevalent theories (or, to use Hunt's term, ideologies) advanced to support the brand-new status quo? Do these concepts still exist today?
Choose an important question about your project, related to the most modern and advanced blood pressure machines, to address via prototyping and reduce project risk. Example project: "a special blood pressure machine product that is a wireless, remote-monitoring device that connects to a smartphone app with accurate measurements." Design a model, draw it, and include accompanying test(s). Construct a (more advanced) prototype and execute tests for it. Be prepared to share your prototype slides and test results in class.
Your goal is to formulate a set of psycholinguistic hypotheses that interest you and use previously collected data to investigate these hypotheses. Your hypothesis should be plausible-ish, but feel free to be creative in what you investigate. Any statistical methods are ok to use in your analysis. It’s not required that you have statistically significant results, but it is a requirement to do the analyses properly.
Include the R code you ran.
What I expect: To be written up in the form of a lab report that discusses your findings.
- It should include an introduction, hypotheses, methods (what data was used, where it came from, what statistical tests were chosen and why, etc), results, discussion (what do the results mean scientifically), and a conclusion.
- Do at least two analyses, complete with checks for assumptions.
- Make sure to use tables and plots when appropriate. And test assumptions!
- Remember to motivate your choices and adequately illustrate your findings using plots and graphs.
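As a sketch of what "checks for assumptions" can look like in R, here is a minimal example on simulated reaction times. The variable names and values are made up purely for illustration, not taken from any real dataset:

```r
# Simulate reaction times for two hypothetical groups of words
set.seed(1)
rt_high <- rnorm(50, mean = 620, sd = 60)  # e.g., high-frequency words
rt_low  <- rnorm(50, mean = 680, sd = 65)  # e.g., low-frequency words

# Assumption check 1: normality within each group
shapiro.test(rt_high)
shapiro.test(rt_low)

# Assumption check 2: homogeneity of variance
var.test(rt_high, rt_low)

# If the assumptions hold, run a two-sample t-test;
# otherwise a non-parametric alternative such as wilcox.test()
t.test(rt_high, rt_low, var.equal = TRUE)
```

Report the assumption tests alongside the main test so the reader can see why the test was appropriate.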
2 Working with the English Lexicon Project
The English Lexicon Project (ELP) is a corpus of English lexical items with behavioral data for said items. That is, it contains accuracy and reaction times in lexical decision experiments and naming latencies in word naming experiments. In addition to these latencies, you are also provided with the standard deviations for each word for each experiment type, as well as the number of observations for each item. Basically, they've already run the experiments for you, so you just need to analyze the data.
The data frame for the ELP is massive. It has 12704 observations and 24 variables. But, it’s unlikely that you’ll be using that much data. Instead, the observations and variables you use will be determined by your experimental hypothesis and the particular analysis you select.
Reading in a .csv file is slightly different from reading a .txt file, since the values are separated by commas rather than tabs. You can use the following (remember to replace file.choose() if you are on a system that uses a different command to bring up the file-selection interface):
data <- read.csv(file.choose(), header=T)
Type head(data) to ensure you correctly loaded the file. And save your work often!
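A few other quick checks are worth running right after loading. This sketch assumes the file was read into `data` as above:

```r
# Quick sanity checks on the freshly loaded data frame
dim(data)      # rows and columns: should be 12704 x 24 for the full ELP file
str(data)      # column names and the type R assigned to each
summary(data)  # per-column ranges, plus NA counts if any values are missing
```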
2.1 Explanation of the columns in the ELP dataset
There are a lot of columns in the ELP dataset. Here is a brief description of each column, and full descriptions can be found in the information on the ELP website.
2.1.1 Independent variables
Word The specific lexical item
Length Number of orthographic characters (letters) composing the word
Freq_Hal Word frequency
Ortho_N Number of orthographic neighbours (e.g., ace = ate, act, ale, are, …)
Phono_N Number of phonological neighbours (e.g., wings = wins, rings, …)
Phono_N_H Number of phonological neighbours including homophones
Freq_N Average frequency of the orthographic neighbourhood of a particular word
Freq_N_P Average frequency of the phonological neighbourhood of a particular word
BG_Sum Sum of the bigram count for a particular word
BG_Mean Average bigram count for a particular word
BG_Freq_By_Pos Sum of the bigram count (by position) for a particular word
NPhon Number of phonemes in the standard pronunciation for a particular word
NSyll Number of syllables in the standard pronunciation for a particular word
MorphSp Morphological (orthographic) composition of a particular word. See the ELP website for a key to the symbols.
MorphPr Morphological (phonetic) composition of a particular word. See the ELP website for a key to the symbols.
NMorph Number of morphemes
2.1.2 Dependent variables
Lexical decision task
I_Mean_RT Mean reaction time for a particular word (in ms) across all participants
I_SD Standard deviation of mean reaction time for a particular word (in ms) across all participants
Obs Number of observations comprising reaction time data for a particular word
I_Mean_Accuracy Average accuracy for a particular word across all participants
Naming task
I_NMG_Mean_RT Mean naming latency for a particular word (in ms) across all participants
I_NMG_SD Standard deviation of mean naming latency for a particular word (in ms) across all participants
I_NMG_Obs Number of observations comprising naming latency data for a particular word
I_NMG_Mean_Accuracy Average accuracy for a particular word across all participants
You will probably want to do some serious subsetting of the data. Isolate just the rows and columns you want. Decide if you want to bin your lexical variables, e.g., create nominal categories such as high vs. low frequency words, or use continuous measures, e.g., correlate your dependent variable against your independent variables. In the ELP data, there are also rows with missing data (what R calls NA). Since many statistical tests prefer groups of equal sizes, you might need to remove rows with missing data. Do this using the na.omit() function, but only after you have selected which columns you'll be working with. For instance, if the data was stored in d, and you only wanted columns 1, 2, 3, 40, 41, and 42, you can do
d2 <- d[,c(1,2,3,40,41,42)]
d2 <- na.omit(d2)
to save just those columns to d2, and then strip out the rows with NA values. One way to get started is to pick a column from a dataset and think about whether there are groups you could subset out, based on data from other columns. Or, you can take that column and think about how it might correlate with another column in the dataset. These are only suggestions, though. There’s a lot of data here, and lots of things one could look at. You don’t need statistically significant results to write the paper, so you can be creative in what you test.
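As an illustration of the two approaches (correlating a continuous measure vs. binning into groups), here is a sketch using a tiny made-up data frame in place of a real ELP subset. `Freq_Hal` and `I_Mean_RT` are the ELP column names described above, but the values below are invented:

```r
# Toy stand-in for an ELP subset: word frequency and mean lexical decision RT
d2 <- data.frame(
  Freq_Hal  = c(12, 4500, 37, 980, 3, 21000, 150, 62),
  I_Mean_RT = c(710, 590, 680, 620, 760, 570, 650, 700)
)

# Continuous approach: correlate log frequency with mean RT
cor.test(log(d2$Freq_Hal + 1), d2$I_Mean_RT)

# Binned approach: median-split frequency into high vs. low groups,
# then compare mean RTs between the two groups
d2$freq_bin <- ifelse(d2$Freq_Hal > median(d2$Freq_Hal), "high", "low")
t.test(I_Mean_RT ~ freq_bin, data = d2)
```

The log transform is a common choice for word frequency, which is heavily right-skewed; check this against your own data before committing to it.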
Feel free to search the primary literature for inspiration. If the methods others use are too tricky, do not let yourself get stuck on one paper. For any particular topic, there are almost always many papers out there once you discover the hidden keywords to use (they’re most often in the introduction or background, and in the discussion).
Pick a question and remember to follow the steps before proceeding with the actual statistical tests. Remember to report your statistics in the appropriate manner, including for your tests for assumptions and for the main test you’re running. Consult the Gries book for some basic forms you can borrow from if you need help with the style for reporting results.
Other help: For help with plotting, use Quick-R. Or, if you're more adventurous, you could try ggplot2, which makes prettier plots than the standard R plotting tools.
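A minimal ggplot2 sketch, using made-up length/RT values in place of a real ELP subset (install the package with install.packages("ggplot2") if needed):

```r
library(ggplot2)

# Toy data standing in for an ELP subset: word length vs. mean RT
d2 <- data.frame(
  Length    = c(3, 4, 5, 6, 7, 8, 9, 10),
  I_Mean_RT = c(590, 605, 625, 640, 660, 675, 690, 715)
)

# Scatterplot with a fitted regression line
ggplot(d2, aes(x = Length, y = I_Mean_RT)) +
  geom_point() +
  geom_smooth(method = "lm") +
  labs(x = "Word length (letters)", y = "Mean lexical decision RT (ms)")
```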
What are some advantages to using a boxplot? What are some disadvantages?
"Describe the distribution..."
Context - What variable is being measured?
Shape - Right/left skew, symmetric, modes
Outliers - Unusual points
Center - Mean, median, general center
Spread - Range, IQR, standard deviation
Describe your distribution of salaries:
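A sketch of how such a description could be backed up in R, using simulated (right-skewed) salary values rather than real data:

```r
# Simulate a right-skewed salary distribution (lognormal is a common
# stand-in for income data; these parameters are arbitrary)
set.seed(2)
salaries <- rlnorm(200, meanlog = 10.8, sdlog = 0.4)

summary(salaries)   # min, quartiles, median, mean, max (center)
sd(salaries)        # spread: standard deviation
IQR(salaries)       # spread: interquartile range
boxplot(salaries, horizontal = TRUE,
        main = "Distribution of salaries")  # shape and outliers at a glance
```

For a right-skewed distribution like this one, the mean sits above the median, and the boxplot flags the long upper tail as outliers.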