Additional File 4. Selected bibliographic references of a general nature
We present selected bibliographic references on quasi-experimental (Cx.) research and other non-randomized studies (hereafter, "and other I.NA.").

Some of them are general in nature, covering the types of design, the characteristics of these studies, the precautions they require, and their validity. Most refer to the social sciences in general or to education in particular. Nevertheless, we have grouped separately some references focused on Cx. research (and other I.NA.) in the specific fields of clinical psychology, business and organizations, education, and health, as well as Cx. studies conducted with secondary data or carried out over the Internet. Finally, some references on the concept of causality are listed.

For all documents located on a web page, it was verified in July 2013 that they could still be retrieved.
Contents:

1.- Selected bibliographic references on Cx. research (and other I.NA.) of a general nature, covering types of design, characteristics of these studies, and the precautions they require.

2.- Selected bibliographic references on Cx. research (and other I.NA.) in the specific field of clinical psychology.

3.- Selected bibliographic references on Cx. research (and other I.NA.) in the specific field of business and organizations.

4.- Selected bibliographic references on Cx. research (and other I.NA.) in the specific field of education.

5.- Selected bibliographic references on Cx. research (and other I.NA.) in the specific field of health.

6.- Selected bibliographic references on Cx. research (and other I.NA.) conducted with secondary data or carried out over the Internet.

7.- Selected bibliographic references on the concept of causality.



References:
1.- Selected bibliographic references on Cx. research (and other I.NA.) of a general nature, covering types of design, characteristics of these studies, and the precautions they require.
Abelson, R. P. (1997). On the surprising longevity of flogged horses: Why there is a case for the significance test. Psychological Science, 8, 12-15.

Aiken, L.S., West, S.G., Schwalm, D.E., Carroll, J., y Hsiung, S. (1998). Comparison of a randomized and two quasi-experimental designs in a single outcome evaluation: Efficacy of a university-level remedial writing program. Evaluation Review, 22(4), 207-244.

Anguera, M.T., Arnau, J., Ato, M., Martínez, R., Pascual, J., y Vallejo, G. (Eds.). (1995). Métodos de investigación en psicología. Madrid: Síntesis.

Ato, M. (1995). Tipología de los diseños cuasi-experimentales. En M. T. Anguera et al., (Eds.), Métodos de Investigación en Psicología, (pp. 245-266). España: Síntesis.

Ato, M. (1995). Análisis estadístico I: Diseños con variable de asignación no conocida. En M. T. Anguera et al., (Eds.), Métodos de Investigación en Psicología (pp. 271-302). España: Síntesis.

Ato, M. (1995). Análisis estadístico II: Diseños con variable de asignación conocida. En M. T. Anguera et al., (Eds.), Métodos de Investigación en Psicología (pp. 305-319). España: Síntesis.

Ato, M., y Vallejo, G. (2007). Diseños Experimentales en Psicología. Madrid: Pirámide.

Avellar, S., y Paulsell, D. (2011). Lessons Learned from the Home Visiting Evidence of Effectiveness Review. Office of Planning, Research and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services. Washington, DC. Disponible online en http://homvee.acf.hhs.gov/Lessons_Learned.pdf

Bandy, T., y Moore, K.A. (2011). What Works for Promoting and Enhancing Positive Social Skills: Lessons from Experimental Evaluations of Programs and Interventions. (Research-to-Results Brief). Washington, DC: Child Trends. Disponible online en http://www.childtrends.org/Files//Child_Trends_2011_03_02_RB_WWSocialSkills.pdf

Baughman, M. (2008). The influence of scientific research and evaluation on publishing educational curriculum. New Directions for Evaluation, 117, 85-94.

Berger, M.L., Mamdani, M., Atkins, D., y Johnson, M. (2009). Research practices for comparative effectiveness research: defining, reporting and interpreting non-randomized studies of treatment effects using secondary data sources. The ISPOR Good Research Practices for Retrospective Database Analysis Task Force Report—Part I. Value in Health, 12(8), 1044-1052.

Bloom, H.S., Michalopoulos, C., Hill, C.J., y Lei, Y. (2002). Can Nonexperimental Comparison Group Methods Match the Findings from a Random Assignment Evaluation of Mandatory Welfare-to-Work Programs?. MDRC Working Papers on Research Methodology. Disponible online en http://www.mdrc.org/publications/66/abstract.html.

Campbell, D.T. (1969). Reforms as experiments, American Psychologist, 24, 409-429.

Campbell, D.T. (1982). Can we be scientific in applied social science? Paper presented at the Annual Meeting of the American Educational Research Association. [Reprinted in R.F. Conner, D.G. Altman y C. Jackson (1984), Evaluation Studies Review Annual, 9, 26-48].

Campbell, D.T. (1986). Relabeling internal and external validity for applied social scientists. En W.M.K. Trochim (Ed.). Advances in quasi-experimental design and analysis. San Francisco: Jossey-Bass.

Campbell, D.T., y Fiske, D.W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56, 81-105.

Campbell, D.T., y Stanley, J.C. (1963). Experimental and quasi-experimental designs for research on teaching. En N.L. Gage (Ed.). Handbook of research on teaching (pp. 171-246). Chicago: Rand McNally.

Campbell, D.T., y Stanley, J.C. (1966). Experimental and quasi-experimental designs for research. Chicago: Rand McNally (Spanish translation: Diseños experimentales y cuasi experimentales en la investigación social. Buenos Aires: Amorrortu, 1973).

Castro, F. G., Barrera, M., y Holleran Steiker, L.K. (2010). Issues and challenges in the design of culturally adapted evidence-based interventions. Annual Review of Clinical Psychology, 6, 213-239.

Christensen, L.B. (2006). Experimental methodology (10th ed.). Boston: Allyn & Bacon.

Christie, C.A., y Nesbitt, D. (2010). Insight Into Evaluation Practice: A Content Analysis of Designs and Methods Used in Evaluation Studies Published in North American Evaluation-Focused Journals. American Journal of Evaluation, 31(3), 326-346.

Cohen, J. (1994). The earth is round (p < .05). American Psychologist, 49, 997-1003.

Cook, T.D. (2006). Describing what is special about the role of experiments in contemporary educational research. Putting the “Gold Standard” Rhetoric into Perspective. Journal of Multidisciplinary Evaluation, 6(3), 1-10.

Cook, T.D. (2007). ¿Por qué los investigadores que realizan evaluación de programas y acciones educativas eligen no usar experimentos aleatorizados? Paedagogium, 35.

Cook, T.D. (2007). Randomized experiments in education: Assessing the objections to doing them. Economics of Innovation and New Technology, 16(5), 331-355.

Cook, T.D., y Campbell, D.T. (1979). Quasi-experimentation: design and analysis issues for field settings. Chicago: Rand McNally Publishing Company.

Cook, T.D., Campbell, D.T., y Peracchio, L. (1990). Quasi Experimentation. En M.D. Dunnette y L.M. Hough (Eds.), Handbook of Industrial & Organizational Psychology, (2nd ed.). Palo Alto, CA: Consulting Psychologists Press.

Cook L., Cook B.G., Landrum, T.J., y Tankersley, M. (2008). Examining the Role of Group Experimental Research in Establishing Evidenced-Based Practices. Intervention in School and Clinic, 44(2), 76-82.

Cook, T.D., y Foray, D. (2007). Building the capacity to experiment in schools: A case study of the Institute of Educational Sciences in the U. S. Department of Education. Economics of Innovation and New Technology, 16(5), 385-402.

Cook, T. D., y Gorard, S. (2007). Where does good evidence come from? International Journal of Research and Method in Education, 30(3), 307-323.

Cook, T.D., y Payne, M.R. (2002). Objecting to the objections to using random assignment in educational research. En F. Mosteller y R.F. Boruch (Eds.), Evidence Matters: Randomized Trials in Education Research. Washington, D.C.: Brookings Institution.

Cook, T.D., Scriven, M., Coryn, C.L.S., y Evergreen, S.D.H. (2010). Contemporary thinking about causation in evaluation: A dialogue with Tom Cook and Michael Scriven. American Journal of Evaluation, 31(1), 105-117.

Cook, T.D., y Shadish, W.R. (1994). Social experiments: Some developments over the past fifteen years. Annual Review of Psychology, 45, 545-580.

Cook, T.D., Shadish, W.R., y Wong, V.C. (2008). Three conditions under which observational studies produce the same results as experiments. Journal of Policy Analysis and Management, 27(4), 724-750.

Cook, T.D., y Steiner, P.M. (2009). Some empirically viable alternatives to the randomized experiment. Journal of Policy Analysis and Management, 28(1), 165-166.

Cook, B.G., Tankersley, M., Cook, L., y Landrum, T.J. (2008). Evidence-Based Practices in Special Education: Some Practical Considerations. Intervention in School and Clinic, 44, 69-75.

Cook, B.G., Tankersley, M., y Landrum, T.J. (2009). Determining evidence-based practices in special education. Exceptional Children, 75(3), 365-383.

Cox, D. (1958). The Planning of Experiments. New York: John Wiley and Sons.

Creswell, J.W. (2005). Educational research: Planning, conducting, and evaluating quantitative and qualitative research (2nd ed.). New York: Pearson.

Creswell, J.W. (2009). Research design: Qualitative, quantitative and mixed methods approaches. London: SAGE Publications.

Davidson, E.J. (2006). The RCTs-Only Doctrine: Brakes on the Acquisition of Knowledge?. Journal of MultiDisciplinary Evaluation, 3 (6), ii-v.

Datta, L.E. (2007). What are we, chopped liver? or why it matters if the comparisons are active and what to do. The Evaluation Center, The Evaluation Café, Disponible online en http://www.wmich.edu/evalctr/wp-content/uploads/2010/05/chopped-liver.pdf

Department of Education (2005). Scientifically Based Evaluation Methods. Federal Register, 70(15), 3586-3589.

Des Jarlais, D.C., Lyles. C., y Crepaz, N. (2004). Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: the TREND statement. American Journal of Public Health, 94, 361-366.

Donaldson, S., y Christie, C. (2005). The 2004 Claremont debate: Lipsey vs. Scriven. Journal of Multidisciplinary Evaluation, 2(3), 60-77.

Dunn, W.N. (Ed.). (1998). The experimenting society: Essays in honor of Donald T. Campbell. New Brunswick, NJ: Transaction Publishers.

Eastmond, N. (1998). Commentary: When Funders Want to Compromise Your Design. American Journal of Evaluation, 19(3), 392-395.

Eggers, H.W. (2006). Planning and Evaluation: Two Sides of the Same Coin. Journal of MultiDisciplinary Evaluation, 3(6), 30-57.

Finkelstein, M., Levin, B., y Robbins, H. (1996). Clinical and prophylactic trials with assured new treatment for those at greater risk: I. A design proposal. Journal of Public Health, 86(5), 691-695.

Fitz-Gibbon, C.T., y Morris, L.L. (1978). How to design a program evaluation. Beverly Hills, CA: Sage Publications.

Flay, B.R. (1986). Efficacy and effectiveness trials (and other phases of research) in the development of health promotion programs. Preventive Medicine, 15, 451-474.

Flay, B.R., Biglan, A., Boruch, R.F., Castro, F.G., Gottfredson, D., Kellam, S., Moscicki, E.K., Schinke, S., Valentine, J.C., y Ji, P. (2005). Standards of evidence: Criteria for efficacy, effectiveness and dissemination. Prevention Science, 6(3), 151-175.

Fleiss, J.L. (1986). The design and analysis of clinical experiments. New York: John Wiley & Sons.

Flynn, R. J., y Bouchard, D. (2005). Randomized and quasi-experimental evaluations of program impact in child welfare in Canada: A review. Canadian Journal of Program Evaluation, 20(3), 65-100.

Gall, M.D., Gall, J.P., y Borg, W.R. (2006). Educational research: An introduction (8th ed.). Boston: Allyn & Bacon.

Gamse, B. C., y Singer, J. D. (2005). Lessons from the Red Sox playbook. Harvard Education Letter, 21(1), 7-8.

Gersten R., y Edyburn, D. (2007). Defining quality indicators for special education technology research. Journal of Special Education Technology, 22(3), 3-18.

Gersten, R., Fuchs, L., Compton, D., Coyne, M., Greenwood, C., y Innocenti, M.S. (2005). Quality indicators for group experimental and quasi-experimental research in special education. Exceptional Children, 71(2), 149-164.

Green, J. (2010). Points of Intersection between Randomized Experiments and Quasi-Experiments. The ANNALS of the American Academy of Political and Social Science, 628(1), 97-111.

Grimshaw, J., Campbell, M., Eccles, M., y Steen, N. (2000). Experimental and quasi-experimental designs for evaluating guideline implementation strategies. Family Practice, 17(Suppl 1), S11-S18.

Gorard, S., y Cook, T. (2007). Where does good evidence come from?. International Journal of Research & Method in Education, 30(3), 307-323.

Hamilton, J., y Bronte-Tinkew, J. (2007). Logic models in out-of-school time programs. (Research-to-Results Brief). Washington, DC: Child Trends. Disponible en http://www.childtrends.org/Files//Child_Trends-2007_01_05_RB_LogicModels.pdf.

Rothstein, H.R., y Bushman, B.J. (2012). Publication bias in psychological science: Comment on Ferguson and Brannick (2012). Psychological Methods, 17(1), 129-136.

Harris, A.D., Bradham, D.D., Baumgarten, M., Zuckerman, I.H., Fink, J.C., y Perencevich, E.N. (2004). The use and interpretation of quasi-experimental studies in infectious diseases. Clinical Infectious Diseases, 38, 1586-1591.

Heckman, J.J. (1989). Causal Inference and Nonrandom Samples. Journal of Educational Statistics, 14(2), 159-68.

Heckman, J. (1992). Randomization and social policy evaluation. En C. Manski y I. Garfinkel (Eds.), Evaluating Welfare and Training Programs (pp. 201-230). Cambridge: Harvard University Press.

Heckman, J.J., y Hotz, V.J. (1989). Choosing among Alternative Nonexperimental Methods for Estimating the Impact of Social Programs: The Case of Manpower Training. Journal of the American Statistical Association, 84(408), 862-74.

Horn, S.D., DeJong, G., y Deutscher, D. (2012). Practice-based evidence research in rehabilitation: an alternative to randomized controlled trials and traditional observational studies. American Journal of Physical Medicine & Rehabilitation 93(8 Suppl), S127-37.

Human Resources Development Canada. (1998). Quasi-Experimental Evaluation. Evaluation and Data Development, Strategic Policy, SPAH053E-01-98. Disponible online en http://www.hrsdc.gc.ca/en/cs/sp/sdc/evaluation/spah053e/page00.shtml.

Hunter, D.E.K. (2006). Daniel and the rhinoceros. Evaluation and Program Planning, 29, 180-185.

Institute of Education Sciences (2003). What Works Clearinghouse study review standards. Disponible online en http://www.whatworks.ed.gov/reviewpro-cess/study_standards_final.pdf



Johnston, M.V., Ottenbacher K.J., y Reichardt C.S. (1995). Strong quasi-experimental designs for research on the effectiveness of rehabilitation. American Journal of Physical Medicine & Rehabilitation,74(5), 383-92.

Judd, C M., y Kenny, D.A. (1981). Estimating the effect of social interventions. New York: Cambridge University Press.

Kaplan, D. (Ed.) (2004). The SAGE Handbook of Quantitative Methodology for the Social Science. Thousand Oaks, CA: Sage Publications.

Kerlinger, F.N., y Lee, H.B. (2000). Foundations of behavioral research (4th ed.). Fort Worth, TX: Harcourt.

Lipsey, M.W., y Cordray, D.C. (2000). Evaluation Methods for Social Intervention. Annual Review of Psychology, 51, 345-375.

Lipsey, M.W., y Wilson, D.B. (1993). The Efficacy of Psychological, Educational, and Behavioral Treatment: Confirmation from Meta-Analysis. American Psychologist, 48(12), 1181-1209.

Leedy, P.D., y Ormrod, J.E. (2010). Practical research: Planning and design (9th ed.). Upper Saddle River, NJ: Prentice Hall.

Mabry, L. (2008). Consequences of No Child Left Behind on evaluation purpose, design, and impact. New Directions for Evaluation, 117, 21-36.

Marcantonio, R.J., y Cook, T.D. (1994). Convincing quasi-experiments: The interrupted time series and regression-discontinuity designs. En J.S.Wholey, H.P. Hatry, y K.E. Newcomer (Eds.), Handbook of practical program evaluation (pp. 133-154). San Francisco: Jossey-Bass.

Mark, M.M., y Cook, T.D. (1984). Design of randomized experiments and quasi-experiments. En L. Rutman (Ed.), Evaluation research methods: A basic guide (2nd ed., pp. 65-120). Beverly Hills, CA: Sage Publications.

Moore, K.A. (2008). Quasi-experimental evaluations. (Research-to-Results Brief). Washington, DC: Child Trends. Disponible online en http://www.childtrends.org/Files/Child_Trends-2008_01_16_Evaluation6.pdf.

Morgan, S.L., y Winship, C. (2007). Counterfactuals and causal inference: Methods and principles for social research. New York: Cambridge University Press.

Orwin, R., Cordray, D., y Huebner, R.N. (1994). Judicious Application of Randomized Designs. In K. J. Conrad, Critically Evaluating the Role of Experiments. New Directions in Evaluation, 63, 73-86.

Parker, R.N., Asencio, E.K., y Plechner, D. (2006). How Much of a Good Thing is Too Much? Explaining the Failure of a Well-Designed, Well-Executed Intervention in Juvenile Hall for "Hard-to-Place" Delinquents. New Directions for Evaluation, 110, 45-57.

Peck, L.R., Kim, Y., y Lucio, J. (2012). An Empirical Examination of Validity in Evaluation. American Journal of Evaluation, 33, 350-365.

Pedhazur, E.J., y Schmelkin, L. P. (1991). Measurement, design, and analysis: An integrated approach. Hillsdale, NJ, England: Lawrence Erlbaum Associates, Inc.

Pohl, S., Steiner, P.M., Eisermann, J., Soellner, R., y Cook, T.D. (2009). Unbiased causal inference from an observational study: Results of a within-study comparison. Educational Evaluation and Policy Analysis, 31(4), 463–479.

Reeves, B.C., y Gaus, W. (2004). Guidelines for reporting non-randomised studies. Forsch Komplementarmed Klass Naturheilkd, 11 (Suppl 1), 46-52.

Reichardt, C.S. (2011). Evaluating Methods for Estimating Program Effects. American Journal of Evaluation, 32(2), 246-272.

Reichardt, C.S. (2011). Criticisms of and an alternative to the Shadish, Cook, and Campbell validity typology. New Directions for Evaluation, 130, 43-53.

Robins, J. (2001). Data, design, and background knowledge in etiologic inference. Epidemiology, 12, 313-320.

Rossi, P.H., Lipsey, M.W., y Freeman, H.E. (2004). Evaluation: A systematic approach (7th ed.). Thousand Oaks, CA: SAGE Publications.

Sanders, J.R. (2006). Ten Things Evaluation Needs: An Evaluation Needs Assessment. Journal of MultiDisciplinary Evaluation, 3(6), 58-59.

Scheirer, M.A. (1998). Commentary: Evaluation Planning is The Heart of the Matter. American Journal of Evaluation, 19(3), 385-391.

Schulz, R., Czaja, S.J., McKay, J.R., Ory, M.G. y Belle, S.H. (2010). Intervention Taxonomy (ITAX): Describing Essential Features of Interventions (HMC). American journal of health behavior, 34(6): 811–821.

Scriven, M. (2006). Converting Perspective to Practice. Journal of MultiDisciplinary Evaluation, 3(6), 8-9.

Scriven, M. (2007). The logic of evaluation. En H.V. Hansen et al. (Eds.), Dissensus and the Search for Common Ground (pp. 1-16). Windsor, ON: OSSA.

Shadish, W.R., y Cook, T.D. (2009). The renaissance of field experimentation in evaluating interventions. Annual Review of Psychology, 60, 607-629.

Shadish, W.R., Cook, T.D., y Campbell, D.T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.

Shadish, W.R., y Heinsman, D.T. (1997). Experiments versus quasi-experiments: Do they yield the same answer? NIDA Research Monographs, 170, 147-164.

Shadish, W.R., y Myers, D. (2004). Research Design Policy Brief. Documento elaborado para the Campbell Collaboration. Disponible online en http://www.campbellcollaboration.org/artman2/uploads/1/C2_Research_Design_Policy_Brief-2.pdf.

Shadish, W.R., y Myers, D. (2004). How to make a Campbell Collaboration Review: The Review. Documento elaborado para Nordic Campbell Center (The DNIHSR). Disponible online en http://www.sfi.dk/graphics/Campbell/Dokumenter/For_Forskere/guide_3_review_samlet20DEC04.pdf.

Shadish, W.R., Newman, D.L., Scheirer, M.A., y Wye, C. (Eds.). (1995). Guiding principles for evaluators (New Directions for Program Evaluation). San Francisco: Jossey-Bass.

Shadish, W.R., y Ragsdale, K. (1996). Random versus nonrandom assignment in controlled experiments: Do you get the same answer?. Journal of Consulting and Clinical Psychology, 64(6), 1290-1305

Singleton, R.A., y Straits, B.C. (2004). Approaches to Social Research (4th ed., pp. 43-75). Oxford: Oxford University Press.

Slavin, R.E. (2008). What works? Issues in synthesizing educational program evaluations. Educational Researcher, 37, 5-14.

Sloane, F. (2008). Comments on Slavin: Through the Looking Glass: Experiments, Quasi-Experiments, and the Medical Model. Educational Researcher, 37, 41-46

Society for Prevention Research. (2010). Standards of evidence: Criteria for efficacy, effectiveness, and dissemination. Disponible online en http://www.preventionresearch.org/StandardsofEvidencebook.pdf.

Song, M., y Herman, R. (2010). Critical Issues and Common Pitfalls in Designing and Conducting Impact Studies in Education: Lessons Learned From the What Works Clearinghouse (Phase I). Educational Evaluation and Policy Analysis September, 32, 351-371.

Sridharan, S., y Nakaima, A. (2011). Ten steps to making evaluation matter. Evaluation and Program Planning, 34(2), 135-146.

Steiner, P.M., Wroblewski, A., y Cook, T.D. (2009). Randomized Experiments and Quasi-Experimental Designs in Educational Research. En K. Ryan y J.B. Cousins (Eds.), The SAGE International Handbook of Educational Evaluation (pp. 75-95). London, UK: Sage Publications.

Pattanayak, S.K. (2009). Rough guide to impact evaluation of environmental and development programs. SANDEE working paper / South Asian Network for Development and Environmental Economics (SANDEE), no. 40-09. Disponible en http://idl-bnc.idrc.ca/dspace/bitstream/10625/41844/1/129483.pdf.

Thompson, B., Diamond, K., McWilliam, R., Snyder, P., y Snyder, S. (2005). Evaluating the quality of evidence from correlational research for evidence-based practice. Exceptional Children, 71, 181-194.

Trochim, W.M.K. (2001). The research methods knowledge base. Cincinnati: Atomic Dog Publishing.

Valentine, J.C., y Cooper, H. (2003). What Works Clearinghouse Study Design and Implementation Assessment Device (Version 1.0). Washington, DC: U.S. Department of Education. Disponible online en http://www.w-w-c.org/standards.html.

Valentine, J.C., y Cooper, H. (2008). A systematic and transparent approach for assessing the methodological quality of intervention effectiveness research: the Study Design and Implementation Assessment Device (Study DIAD). Psychological Methods, 13(2), 130-49.

van der Laan, M.J., y Rose, S. (2011). Targeted Learning: Causal Inference for Observational and Experimental Data. New York: Springer.

Wasserman, L. (2004). All of Statistics: A Concise Course in Statistical Inference. New York : Springer Science Business Media.

West, S.G., Duan, N., Pequegnat, W., Gaist, P., Des Jarlais, D.C., Holtgrave, D., Szapocznik, J., Fishbein, M., Rapkin, B., Clatts, M., y Mullen, P.D. (2008). Alternatives to the Randomized Controlled Trial. American Journal of Public Health, 98 (8), 1359-1366.

Wu, C.F.J., y Hamada, M. (2000). Experiments: Planning, analysis, and parameter design optimization. New York: John Wiley & Sons.

2.- Selected bibliographic references on Cx. research (and other I.NA.) in the specific field of clinical psychology.
American Psychological Association. (2002). Criteria for evaluating treatment guidelines. American Psychologist, 57, 1052-1059.

APA Presidential Task Force on Evidence-Based Practice. (2006). Evidence-based practice in psychology. American Psychologist, 61, 271-285.

Barlow, D.H. (2004). Psychological treatments. American Psychologist, 59, 869-879.

Carpinello, S.E., Rosenberg, L., Stone, J., Schwager, M., y Felton, C.J. (2002). New York State’s campaign to implement evidence-based practices for people with serious mental disorders. Psychiatric Services, 53, 153-155.

Levant, R.F. (2004). The empirically validated treatments movement: A practitioner/educator perspective. Clinical Psychology: Science and Practice, 11, 219-224.

Ollendick, T.H., y Davis, T.E. (2004). Empirically supported treatments for children and adolescents: Where to from here? Clinical Psychology: Science and Practice, 11, 289-293.

Ollendick, T.H., y King, N.J. (2004). Empirically supported treatments for children and adolescents: Advances toward evidence-based practice. En P. M. Barrett y T. H. Ollendick (Eds.), Handbook of interventions that work with children and adolescents: Prevention and treatment (pp.3–25). Chichester, West Sussex, England: Wiley.

Ruscio, A.M., y Holohan, D.R. (2006). Applying empirically supported treatments to complex cases: Ethical, empirical, and practical considerations. Clinical Psychology: Science and Practice, 13, 146-162.

Shadish, W.R., Matt, G.E., Navaro, A.M., y Phillips, G. (2000). The effects of psychological therapies under clinically representative conditions: A meta-analysis. Psychological Bulletin, 126, 512-529.

Shapiro, J.P. (2009). Integrating Outcome Research and Clinical Reasoning in Psychotherapy Planning. Professional Psychology: Research and Practice, 40(1), 46-53.

Westen, D., Novotny, C.M., y Thompson-Brenner, H. (2005). EBP ≠ EST: Reply to Crits-Christoph et al. (2005) and Weisz et al. (2005). Psychological Bulletin, 131, 427-433.
3.- Selected bibliographic references on Cx. research (and other I.NA.) in the specific field of business and organizations.
Andrews, K.M., y Delahaye, B.L. (2000). Influences on knowledge processes in organizational learning: The psychological filter. Journal of Management Studies, 37, 796-810.

Barley, S.R. (2006). When I write my masterpiece: Thoughts on what makes a paper interesting. Academy of Management Journal, 49, 16-20.

Bartunek, J.M., Rynes, S.L., y Ireland, R.D. (2006). Editors’ forum: What makes management research interesting, and why does it matter? The Academy of Management Journal, 49, 9-15.

Buchel, B. (2000). Framework of joint venture development: Theory-building through quantitative research. Journal of Management Studies, 37, 55-83.

Cohen, J. (1990). Things I have learned (so far). American Psychologist, 45, 1304-1312.

Cohen, J. (1992). Cosas que he aprendido (hasta ahora). Anales de Psicología, 8(1-2), 3-17.

Cook, T.D., y Campbell, D.T. (1976). The design and conduct of quasi-experiments and true experiments in field settings. En M. D. Dunnette (Ed.), Handbook of industrial and organizational psychology (pp. 223-326). New York: John Wiley and Sons.

Cox, J.W., y Hassard, J. (2005). Triangulation in Organizational Research: A Re-Presentation. Organization, 12, 109-133.

Deeks, J.J., Dinnes, J., D’Amico, R., Sowden, A.J., Sakarovitch, C., Song, F., et al. (2003). Evaluating non-randomised intervention studies. Health Technology Assessment, 7(27).

Easterby-Smith, M., Golden-Biddle, K., y Locke, K. (2008). Working with pluralism: Determining quality in qualitative research. Organizational Research Methods, 11, 419-429.

Edmondson, A.C., y McManus, S.E. (2007). Methodological fit in management field research. Academy of Management Review, 32, 1155-1179.

Fendt, J., y Sachs, W. (2008). Grounded theory method in management research: Users’ perspectives. Organizational Research Methods, 11, 430-455.

Frick, R. W. (1995). Accepting the null hypothesis. Memory & Cognition, 23, 132-138.

Gephart, R.P. (2004). Qualitative research and the Academy of Management Journal. Academy of Management Journal, 47, 454-462.

Gibbert, M., y Ruigrok, W. (2010). The ‘‘What’’ and ‘‘How’’ of Case Study Rigor: Three Strategies Based on Published Work. Organizational Research Methods, 13(4), 710-737.

Gibbert, M., Ruigrok, W., y Wicki, B. (2008). What passes as a rigorous case study? Strategic Management Journal, 29, 1465-1474.

Grant, A.M., y Wall, T.D. (2009). The Neglected Science and Art of Quasi-Experimentation: Why-to, When-to, and How-to Advice for Organizational Researchers. Organizational Research Methods, 12, 653-686.

Greenberg, J., y Tomlinson, E. C. (2004). Situated experiments in organizations: Transplanting the lab to the field. Journal of Management, 30, 703-724.

Harcum, E. R. (1990). Methodological vs. empirical literature: Two views on the acceptance of the null hypothesis. American Psychologist, 45, 404-405.

Highhouse, S. (2009). Designing experiments that generalize. Organizational Research Methods, 12(3), 554-566.

Hollenbeck, J. R. (2002). Quasi-experimentation and applied psychology: Introduction to a special issue of Personnel Psychology. Personnel Psychology, 55, 587-588.

Judge, T.A., Cable, D.M., Colbert, A.E., y Rynes, S.L. (2007). What causes a management article to be cited? Article, author, or journal? Academy of Management Journal, 50, 491-506.

Kunstmann, L.N., y Merino, E.J.M. (2008). El experimento natural como un nuevo diseño cuasi-experimental en investigación social y de salud. Ciencia y Enfermería, XIV(2), 9-12.

Lawler, E.E. (1977). Adaptive experiments: An approach to organizational behavior research. Academy of Management Review, 2, 576-585.

Locke, K., Golden-Biddle, K., y Feldman, M.S. (2008). Making doubt generative: Rethinking the role of doubt in the research process. Organization Science, 19, 907-918.

Mohrman, S.A., Lawler, E.E., y Associates (2011). Useful research: Advancing theory and practice. San Francisco: Berrett-Koehler.

Paluck, E.L., y Green, D.P. (2009). Prejudice Reduction: What Works? A Review and Assessment of Research and Practice. Annual Review of Psychology, 60, 339-367

Peters, T.J., y Waterman, R.H. (1982). In search of excellence: Lessons from America’s best-run companies. New York : Harper & Row.

Pratt, M. (2000). The good, the bad, and the ambivalent: Managing identification among Amway distributors. Administrative Science Quarterly, 45, 456-493.

Scandura, T.A. y Williams E.A. (2000). Research methodology in management: Current practices, trends, and implications for future research. Academy of Management Journal, 43(6), 1248-1264.

Walker, K.E., y Moore, K.A. (2011). Performance management and evaluation: What’s the difference? (Research-to-Results Brief). Washington, DC: Child Trends. Disponible online en http://www.childtrends.org/Files/Child_Trends-2011_01_19_RB_PerformMgmt.pdf
4.- Selected bibliographic references on Cx. research (and other I.NA.) in the specific field of education.
Anderson, T. (2005). Design-based research and its application to a call center innovation in distance education. Canadian Journal of Learning and Technology, 31(2), 69-84.

Anderson, T., y Shattuck, J. (2012). Design-Based Research : A Decade of Progress in Education Research?. Educational Researcher, 41, 16-25.

Brown, A. (1992). Design experiments: Theoretical and methodological challenges in creating complex interventions in classroom settings. Journal of the Learning Sciences, 2(2), 141-178.

Cobb, P., Confrey, J., diSessa, A., Lehrer, R., y Schauble, L. (2003). Design experiments in educational research. Educational Researcher, 32(1), 9-13.

Collins, A., Joseph, D., y Bielaczyc, K. (2004). Design research: Theoretical and methodological issues. Journal of the Learning Sciences, 13(1), 15-42.

Conceicao, S., Sherry, L., y Gibson, D. (2004). Using developmental research to design, develop and evaluate an urban education portal. Journal of Interactive Learning Research, 15(3), 271-286.

Cook, T.D. (2006). Describing what is special about the role of experiments in contemporary educational research. Putting the “Gold Standard” Rhetoric into Perspective. Journal of Multidisciplinary Evaluation, 3(6), 1-7.

Cook, T.D. (2007). Randomized experiments in education: Assessing the objections to doing them. Economics of Innovation and New Technology, 16(5), 331-355.



Cook L., Cook B.G., Landrum, T.J., y Tankersley, M. (2008). Examining the Role of Group Experimental Research in Establishing Evidenced-Based Practices. Intervention in School and Clinic, 44(2), 76-82.

Cook, T.D., y Foray, D. (2007). Building the capacity to experiment in schools: A case study of the Institute of Educational Sciences in the U. S. Department of Education. Economics of Innovation and New Technology, 16(5), 385-402.

Cook, T.D., y Gorard, S. (2007). Where does good evidence come from? International Journal of Research and Method in Education, 30(3), 307-323.

Cook, T.D., y Payne, M.R. (2002). Objecting to the objections to using random assignment in educational research. En F. Mosteller y R.F. Boruch (Eds.), Evidence Matters: Randomized Trials in Education Research. Washington, D.C.: Brookings Institution.

Cook, B.G., Tankersley, M., Cook, L., y Landrum, T.J. (2008). Evidence-Based Practices in Special Education: Some Practical Considerations. Intervention in School and Clinic, 44, 69-75.

Creswell, J.W. (2005). Educational research: Planning, conducting, and evaluating quantitative and qualitative research (2nd ed.). Upper Saddle River, NJ: Pearson.

Department of Education (2005). Scientifically Based Evaluation Methods. Federal Register, 70(15), 3586-3589.

Donaldson, S., y Christie, C. (2005). The 2004 Claremont debate: Lipsey vs. Scriven. Journal of Multidisciplinary Evaluation, 3, 60-66.

Gersten, R., Fuchs, L., Compton, D., Coyne, M., Greenwood, Ch., y Innocenti M.S. (2005). Quality indicators for group experimental and quasi experimental research in special education. Exceptional Children, 71(2), 149-164.

Herrington, J., McKenney, S., Reeves, T.C., y Oliver, R. (2007). Design-based research and doctoral students: Guidelines for preparing a dissertation proposal. Disponible online en www.editlib.org/d/25967/proceeding_25967.pdf.

Jahnke, I. (2010). Dynamics of social roles in a knowledge management community. Computers in Human Behavior, 26(4), 533-546.

Lipsey, M.W., y Cordray, D.C. (2000). Evaluation Methods for Social Intervention. Annual Review of Psychology, 51, 345-375.

Lipsey, M.W, y Wilson, D.B. (1993). The efficacy of psychological, educational and behavioural treatment. Confirmation from meta-analysis. American Psychologist, 48, 1181-1209.

Oh, E., y Reeves, T. (2010). The implications of the differences between design research and instructional systems design for educational technology researchers and practitioners. Educational Media International, 47(4), 263-275.

Reeves, T. (2000, April). Enhancing the worth of instructional technology research through “design experiments” and other developmental strategies. Paper presented at the American Educational Research Association Annual Meeting. Disponible online en http://itech1.coe.uga.edu/~treeves/AERA2000Reeves.pdf.

Reeves, T.C., Herrington, J., y Oliver, R. (2005). Design research: A socially responsible approach to instructional technology research in higher education. Journal of Computing in Higher Education, 16(2), 96-115.

Stappers, P.J. (2007). Doing design as a part of doing research. En R. Michel (Ed.), Design research now (pp. 81-91). Basel: Birkhäuser.

Tiberghien, A., Vince, J., y Gaidioz, P. (2009). Design-based research: Case of a teaching sequence on mechanics. International Journal of Science Education, 31(17), 2275–2314



5.- Selected bibliographic references on Cx. research (and other I.NA.) in the specific field of health.

Bekkering, G.E., y Kleijnen, J. (2008). Procedures and methods of benefit assessments for medicines in Germany. The European Journal of Health Economics, Suppl 1, 5-29.


Chambers, D., y Wilson P. (2012). A framework for production of systematic review based briefings to support evidence-informed decision-making. Systematic Reviews,1(1), 32.

Connelly, J.B. (2007). Evaluating complex public health interventions: theory, methods and scope of realist enquiry. Journal of Evaluation in Clinical Practice, 13(6),935-41.

Daya, S. (2003). Characteristics of good causation studies. Seminars in Reproductive Medicine, 21(1),73-83.

Ferguson, L. (2004). External validity, generalizability, and knowledge utilization. Journal of Nursing Scholarship, 36(1), 16-22.

Glasgow, R.E y Emmons, K.M. (2007). How can we increase translation of research into practice? Types of evidence needed. Annual Review of Public Health, 28, 413-33.

Green, L.W y Glasgow, R.E. (2006). Evaluating the relevance, generalization, and applicability of research: issues in external validation and translation methodology. Evaluation & the Health Professions, 29(1), 126-53.

Johnston, M.V., Vanderheiden, G.C., Farkas, M.D., Rogers, E.S., Summers, J.A., y Westbrook, J.D., for the NCDDR Task Force on Standards of Evidence and Methods. (2009). The challenge of evidence in disability and rehabilitation research and practice: A position paper. Austin: SEDL. Disponible online en http://www.ncddr.org/kt/products/tfpapers/tfse_challenge/.

Li, L.C., Moja, L., Romero, A., Sayre, E.C., y Grimshaw, J.M. (2009). Nonrandomized quality improvement intervention trials might overstate the strength of causal inference of their findings. Journal of Clinical Epidemiology, 62(9), 959-66.

Morales, J.M., Gonzalo, E., Martín, F.J., y Morilla, J.C. (2008). Evidence Based Public Health: resources on effectiveness of community interventions. Revista Española de Salud Pública, 82(1), 5-20.

Shahar, E., y Shahar, D.J. (2009). On the causal structure of information bias and confounding bias in randomized trials. Journal of Evaluation in Clinical Practice, 15(6), 1214-6.

Sox, H.C., Helfand, M., Grimshaw, J., Dickersin, K., Tovey, D., Knottnerus, J.A., y Tugwell, P. (2010). Comparative effectiveness research: Challenges for medical journals. Trials, 11:45.



Steckler, A., y McLeroy, K.R. (2008). The importance of external validity. American Journal of Public Health, 98(1), 9-10.

Tovey, D., y Dellavalle, R. (2010). Cochrane in the United States of America [editorial]. Cochrane Database Systematic Reviews,  ED000010, http://www.thecochranelibrary.com/details/editorial/847239/Cochrane-in-the-United-States-of-America-by-Dr-David-Tovey--Dr-Robert-Dellavalle.html

Victora, C.G., Habicht, J.P., y Bryce, J. (2004). Evidence-based public health: moving beyond randomized trials. American Journal of Public Health, 94(3), 400-405.

Vlassov, V., y Groves, T. (2010). The role of Cochrane Review authors in exposing research and publication misconduct. Cochrane Database Systematic Reviews, ED000015, http://www.thecochranelibrary.com/details/editorial/886689/The-role-of-Cochrane-Review-authors-in-exposing-research-and-publication-miscond.html.

Wunsch, H., Linde-Zwirble, W.T., y Angus, D.C. (2006). Methods to adjust for bias and confounding in critical care health services research involving observational data. Journal of Critical Care, 21(1), 1-7.

Zwerling, C., Daltroy, L.H., Fine, L.J., Johnston, J.J., Melius, J., y Silverstein, B.A. (1997). Design and conduct of occupational injury intervention studies: a review of evaluation strategies. American Journal of Industrial Medicine, 32, 164-79

6.- Selected bibliographic references on Cx. research (and other I.NA.) conducted with secondary data or carried out over the Internet.
Jain, S., Chen, Y., y Parkes, D.C. (2009). Designing incentives for online question and answer forums. En EC '09: Proceedings of the Tenth ACM Conference on Electronic Commerce (pp.129-138). New York, ACM.

Jensen, D. (2008). Beyond prediction: Directions for probabilistic and relational learning. En 17th International Conference on Inductive Logic Programming, Lecture Notes in Computer Science 4894 (pp. 4-21). Berlin: Springer.

Jensen, D.D., Fast, A.S., Taylor, B.J., y Maier, M.E. (2008). Automatic Identification of Quasi-Experimental Designs for Discovering Causal Knowledge. 14th ACM SIGKDD, International Conference on Knowledge and data mining (pp. 372-380). Las Vegas, NV, USA.

Jensen, D.D., Fast, A.S., Taylor, B.J., Maier, M.E., y Rattigan, M. (2008). Automatic Identification of Quasi-Experimental Designs for Scientific Discovery. Association for the Advancement of Artificial Intelligence (www.aaai.org).

Karimi, K., y Hamilton, H. (2003). Distinguishing causal and acausal temporal relations. The Seventh Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD'2003) (pp. 234-240). Seoul, South Korea.

Kohavi, R., Crook, T., y Longbotham, R. (2009). Online experimentation at Microsoft. En Proceedings of the Third Workshop on Data Mining Case Studies.

Kohavi, R., Longbotham, R., Sommer, D., y Henne, R.M. (2009). Controlled experiments on the web: Survey and practical guide. Data Mining and Knowledge Discovery, 18(1), 140-181.

Maia, M., Almeida, J., y Almeida, V. (2008). Identifying user behavior in online social networks. En SocialNets '08: Proceedings of the 1st Workshop on Social Network Systems (pp. 1-6). New York: ACM.



Nam, K.K., Ackerman, M.S., y Adamic, L.A. (2009). Questions in, knowledge in? A study of Naver's question answering community. En Proc. of the 27th International Conference on Human Factors in Computing Systems (pp. 779-788), Boston, MA.

Oktay, H., Taylor, B.J., y Jensen, D.D. (2010). Causal Discovery in Social Media using quasi-experimental designs. 1st Workshop on Social Media Analytics (SOMA ’10), Washington, DC, USA.

Singh, V.K., Jain, R., y Kankanhalli, M.S. (2009). Motivating contributors in social media networks. En WSM '09: Proceedings of the First SIGMM Workshop on Social Media. New York: ACM.

Walker, K.E., y Moore, K.A. (2011). Performance Management and Evaluation: What's the Difference? (Research-to-Results Brief). Washington, DC: Child Trends. Disponible online en http://www.childtrends.org/Files//Child_Trends-2011_01_19_RB_PerformMgmt.pdf

7.- Selected bibliographic references on the concept of causality.

Behi, R., y Nolan, M. (1996). Causality and control: key to the experiment. British Journal of Nursing, 5(4), 52-55.

Bergsma, W., Croon, M., y Hagenaars, J.A. (2009). Causal analysis: structural equation models and (quasi-) experimental design. Statistics for Social and Behavioral Sciences, 155-190.

Cole, P. (1997). Causality in epidemiology, health policy, and law. Journal of Marketing Research, 27, 10279-10285.



Cook, T.D., Scriven, M., Coryn, C.L.S., y Evergreen, S.D.H. (2010). Contemporary thinking about causation in evaluation: a dialogue with Tom Cook and Michael Scriven. American Journal of Evaluation, 31(1), 105-117.

Cox, D., y Wermuth, N. (2004). Causality: A statistical view. International Statistical Review, 72, 285-305.

Davidson, E.J. (2006, November). Causal inference nuts and bolts. Demonstration session at the American Evaluation Association conference, Portland, OR. Disponible online en http://davidsonconsulting.co.nz/

Dawid, A. (1979). Conditional independence in statistical theory. Journal of the Royal Statistical Society, Series B, 41, 1-31.

Heckman, J. (2008). Econometric causality. International Statistical Review, 76, 1-27.

Holland, P. (1986). Statistics and causal inference. Journal of the American Statistical Association, 81(396), 945-960.

Holland, P. y Rubin, D. (1988). Causal inference in retrospective studies. Evaluation Review 12, 203-231.

Imai, K., Luke, K., Tingley, D., y Yamamoto, T. (2011). Unpacking the Black Box of Causality: Learning about Causal Mechanisms from Experimental and Observational Studies. American Political Science Review, 105(4), 765-789.

Kiiveri, H., Speed, T., y Carlin, J. (1984). Recursive causal models. Journal of the Australian Mathematical Society (Series A), 36, 30-52.

Lauritzen, S. (2004). Discussion on causality. Scandinavian Journal of Statistics, 31, 189-192.

Lindley, D. (2002). Seeing and doing: The concept of causation. International Statistical Review, 70, 191-214.

Maxwell, S.E. (2010). Introduction to the special section on Campbell’s and Rubin’s conceptualizations of causality. Psychological Methods 15(1), 1-2.

Morgan, S., y Winship, C. (2007). Counterfactuals and Causal Inference: Methods and Principles for Social Research (Analytical Methods for Social Research). New York: Cambridge University Press.

Pearl, J. (1993). Comment: Graphical models, causality, and intervention. Statistical Science, 8, 266-269.

Pearl, J. (1995). Causal diagrams for empirical research. Biometrika, 82, 669-710.

Pearl, J. (2003). Statistics and causal inference: A review. Test Journal, 12, 281-345.

Pearl, J. (2009). Causality: Models, Reasoning, and Inference (2nd ed.). New York: Cambridge University Press.

Pearl, J. (2009). Causal inference in statistics: An overview. Statistics Surveys, 3, 96-146.

Pearl, J. (2010). An introduction to causal inference. The International Journal of Biostatistics, 6 (2), art.7.

Rothman, K. (1976). Causes. American Journal of Epidemiology, 104, 587-592.

Rubin, D. B. (1990). Formal modes of statistical inference for causal effects. Journal of Statistical Planning and Inference, 25, 279-292.

Rubin, D. (2005). Causal inference using potential outcomes: Design, modeling, decisions. Journal of the American Statistical Association, 100, 322-331.

Shadish, W.R. (2010). Campbell and Rubin: A primer and comparison of their approaches to causal inference in field settings. Psychological Methods 15, 3-17.

Schlotter, M., Schwerdt, G., y Woessmann, L. (2011). Econometric methods for causal evaluation of education policies and practices: a non-technical guide. Education Economics, 19(2), 109-137.

Scriven, M. (1974). Maximizing the power of causal investigations: The modus operandi method. En W. J. Popham (Ed.), Evaluation in education: Current applications (pp. 68-84). Berkeley, CA: McCutcheon Publishing.

Spirtes, P., Glymour, C., y Scheines, R. (2000). Causation, Prediction, and Search. (2nd ed.). Cambridge: MIT Press.

Thoemmes, F. (2011). Comparison of selected causality theories. Gesundheitswesen. 73(12), 880-883.

VanderWeele, T.J. (2012). The sufficient cause framework in statistics, philosophy and the biomedical and social sciences. En C. Berzuini, P. Dawid y L. Bernardinelli, (eds.) Causality: Statistical Perspectives and Applications, (pp. 180-191). Chichester: Wiley and Sons.



VanderWeele, T.J., y Hernán, M.A. (2006). From counterfactuals to sufficient component causes, and viceversa. European Journal of Epidemiology, 21, 855-858.

Winship, C., y Morgan, S. L. (1999). The estimation of causal effects from observational data. Annual Review of Sociology, 25, 659-706.

