Abedi, J. (2012). Validity issues in designing accommodations. In G. Fulcher & F. Davidson (Eds.), The Routledge handbook of language testing (pp. 48–62). New York, NY: Routledge.
Abedi, J., & Bayley, R. (2014). Improving language outcomes by distinguishing between low English proficiency and learning disabilities [Funding proposal]. Washington, DC: Institute of Education Sciences.
Abedi, J., Heckman, P., & Herman, J. (2010). The Formative Assessment Project. Arlington, VA: National Science Foundation.
Abedi, J., Leon, S., & Kao, J. (2008a). Examining differential distractor functioning in reading assessments for students with disabilities (CSE Report 743). Los Angeles: University of California, Center for the Study of Evaluation/National Center for Research on Evaluation, Standards, and Student Testing.
Abedi, J., Leon, S., & Kao, J. (2008b). Examining differential item functioning in reading assessments for students with disabilities (CSE Report 744). Los Angeles: University of California, Center for the Study of Evaluation/National Center for Research on Evaluation, Standards, and Student Testing.
Abedi, J., & Lord, C. (2001). The language factor in mathematics tests. Applied Measurement in Education, 14, 219–234.
Abedi, J., Lord, C., Hofstetter, C., & Baker, E. (2000). Impact of accommodation strategies on English language learners’ test performance. Educational Measurement: Issues and Practice, 19(3), 16–26.
Ahmann, J. S., & Glock, M. D. (1975). Measuring and evaluating educational achievement. Boston, MA: Allyn & Bacon.
Albus, D., & Thurlow, M. (2013). 2010-11 publicly reported assessment results for students with disabilities and ELLs with disabilities (Technical Report 68). Minneapolis: University of Minnesota, National Center on Educational Outcomes.
Albus, D. A., & Thurlow, M. L. (2007). English language learners with disabilities in state English language proficiency assessments: A review of state accommodation policies (Synthesis Report 66). Minneapolis: University of Minnesota, National Center on Educational Outcomes.
Almond, P., Winter, P., Cameto, R., Russell, M., Sato, E., Clarke-Midura, J., . . . Lazarus, S. (2010). Technology-enabled and universally designed assessment: Considering access in measuring the achievement of students with disabilities—a foundation for research. Journal of Technology, Learning, and Assessment, 10(5). Retrieved from http://ejournals.bc.edu/ojs/index.php/jtla/article/view/1605/1453
Altman, J. R., Lazarus, S. S., Thurlow, M. L., Quenemoen, R. F., Cuthbert, M., & Cormier, D. C. (2008). 2007 survey of states. Minneapolis: University of Minnesota, National Center on Educational Outcomes.
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
Bolt, S. E., & Ysseldyke, J. E. (2006). Comparing DIF across math and reading/language arts tests for students receiving a read-aloud accommodation. Applied Measurement in Education, 19, 329–355.
Bowen, S., & Ferrell, K. (2003). Assessment in low-incidence disabilities: The day-to-day realities. Rural Special Education Quarterly, 22(4), 10–19.
Breimhorst v. ETS (N.D. Cal. March 27, 2001). Chapman/Kidd v. California Department of Education, Preliminary Injunction, C 01-01780 CRB, N.D. Cal. (2002).
Cahalan-Laitusis, C., Morgan, D., Bridgeman, B., Zanna, J., & Stone, E. (2007). Examination of fatigue effects from extended time accommodations on the SAT Reasoning Test. New York, NY: College Board.
Callahan, R., & Gándara, P. (2004). On nobody’s agenda: Improving English-language learners’ access to higher education. In M. Sadowski (Ed.), Teaching immigrant and second-language students: Strategies for success (pp. 107–127). Cambridge, MA: Harvard Education Press.
Carr, T. G. (2008, March). Qualitative review of items that worked and didn’t work. Paper presented at the National Council on Measurement in Education annual meeting, New York, NY.
Carr, T. G., & Kopriva, R. J. (2009, April). It’s about time: Matching English learners and the ways they take tests by using an online tool to properly address individual needs. Paper presented at the National Council on Measurement in Education annual meeting, San Diego, CA.
Castañeda v. Pickard, 648 F.2d 989 (5th Cir. 1981).
Cawthon, S., Leppo, R., Carr, T. G., & Kopriva, R. J. (2013). Towards accessible assessments: The promises and limitations of test item adaptations for students with disabilities and English language learners. Educational Assessment, 18, 73–98.
Christensen, L. L., Albus, D. A., Liu, K. K., Thurlow, M. L., & Kincaid, A. (2013). Accommodations for students with disabilities on state English language proficiency assessments: A review of 2011 state policies. Minneapolis: University of Minnesota, Improving the Validity of Assessment Results for English Language Learners with Disabilities (IVARED).
Christensen, L. L., Braam, M., Scullin, S., & Thurlow, M. L. (2011). 2009 state policies on assessment participation and accommodations for students with disabilities (Synthesis Report 83). Minneapolis: University of Minnesota, National Center on Educational Outcomes.
Clapper, A. T., Morse, A. B., Thompson, S. J., & Thurlow, M. L. (2005). Access assistants for state assessments: A study of state guidelines for scribes, readers, and sign language interpreters (Synthesis Report 58). Minneapolis: University of Minnesota, National Center on Educational Outcomes.
Cook, L., Eignor, D., Steinberg, J., Sawaki, Y., & Cline, F. (2009). Using factor analysis to investigate the impact of accommodations on the scores of students with disabilities on a reading comprehension assessment. Journal of Applied Testing Technology, 10(2). Retrieved from http://www.jattjournal.com/index.php/atp/issue/view/4186
Cormier, D. C., Altman, J. R., Shyyan, V., & Thurlow, M. L. (2010). A summary of the research on the effects of test accommodations: 2007-2008 (Technical Report 56). Minneapolis: University of Minnesota, National Center on Educational Outcomes.
Cortiella, C. (2006). NCLB and IDEA: What parents of students with disabilities need to know and do. Minneapolis: University of Minnesota, National Center on Educational Outcomes.
Cortiella, C., & Kaloi, L. (2009). Understanding the Americans With Disabilities Act amendments and Section 504 of the Rehabilitation Act. New York, NY: National Center for Learning Disabilities.
Council of Chief State School Officers & Association of Test Publishers. (2010). Operational best practices for statewide large-scale assessment programs. Washington, DC: Author.
Crawford, L. (2007). State testing accommodations: A look at their value and validity. New York, NY: National Center for Learning Disabilities.
Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281–302.
DeStefano, L., Shriner, J. G., & Lloyd, C. A. (2001). Teacher decision making in participation of students with disabilities in large-scale assessment. Exceptional Children, 68(1), 7–22.
Disability Rights Advocates. (2001). Do no harm—High stakes testing and students with learning disabilities. Oakland, CA: Author.
Dolan, R. P., Burling, K. S., Harms, M., Beck, R., Hanna, E., Jude, J., . . . Way, W. (2009). Universal design for computer-based testing guidelines. Iowa City, IA: Pearson.
Dolan, R. P., Hall, T. E., Banerjee, M., Chun, E., & Strangman, N. (2005). Applying principles of universal design to test delivery: The effect of computer-based read-aloud on test performance of high school students with learning disabilities. Journal of Technology, Learning, and Assessment, 3(7), 1–31.
Douglas, K. (2004). Teacher ideas on teaching and testing English language learners: Summary of focus group discussions (Report No. 219). College Park: University of Maryland, Center for the Study of Assessment Validity and Evaluation.
Elliott, J. L., & Thurlow, M. L. (2006). Improving test performance of students with disabilities on district and state assessments (2nd ed.). Thousand Oaks, CA: Corwin.
Emick, J., & Kopriva, R. J. (2007, April). The validity of large-scale assessment scores for ELLs under optimal testing conditions: Does validity vary by language proficiency? Presentation at the American Educational Research Association annual meeting, Chicago, IL.
Ercikan, K., Roth, W. M., Simon, M., Sandilands, D., & Lyons-Thomas, J. (in press). Tests fair for all linguistic minority students? Validity and fairness of measurements for diverse linguistic minority students. Applied Measurement in Education.
Erdogan, Y. (2008). Paper-based and computer-based concept mapping: The effects on computer achievement, computer anxiety and computer attitude. British Journal of Educational Technology, 40, 821–836.
Fields, R. (2008). Inclusion of special populations in the national assessment: A review of relevant laws and regulations. Washington, DC: National Assessment Governing Board.
Finch, H., Barton, K., & Meyer, P. (2009). Differential item functioning analysis for accommodated versus non-accommodated students. Educational Assessment, 14(1), 38–56.
Fletcher, J. M., Francis, D. J., Boudousquie, A., Copeland, K., Young, V., Kalinowski, S., & Vaughn, S. (2006). Effects of accommodations on high-stakes testing for students with reading disabilities. Exceptional Children, 72, 136–150.
Fletcher, J. M., Francis, D. J., O’Malley, K., Copeland, K., Mehta, P., Caldwell, C. J., . . . Vaughn, S. (2009). Effects of a bundled accommodations package on high-stakes testing for middle school students with reading disabilities. Exceptional Children, 75, 447–463.
Fuchs, L. S., & Fuchs, D. (1999). Fair and unfair testing accommodations. School Administrator, 56(10), 24–29.
Gee, J. P. (2007). What video games have to teach us about learning and literacy (2nd ed.). New York, NY: Palgrave Macmillan.
Gee, J. P. (2008). Good video games + good learning: Collected essays on video games, learning, and literacy. New York, NY: Peter Lang.
Goertz, M. (2007, June). Standards-based reform: Lessons from the past, directions for the future. Paper presented at Clio at the Table: A Conference on the Uses of History to Inform and Improve Education Policy, Brown University, Providence, RI. Retrieved from http://www7.nationalacademies.org/cfe/Goertz%20Paper.pdf
Gregg, N. (2009). Adolescents and adults with learning disabilities and ADHD: Assessment and accommodation. New York, NY: Guilford.
Grice, H. P. (1975). Logic and conversation. In P. Cole & J. L. Morgan (Eds.), Syntax and semantics: Vol. 3. Speech acts (pp. 41–58). New York, NY: Academic Press.
Gulliksen, H. (1950). Theory of mental tests. New York, NY: Wiley.
Haladyna, T. M., & Downing, S. M. (2004). Construct-irrelevant variance in high-stakes testing. Educational Measurement: Issues and Practice, 23(1), 17–27.
Heritage, M., & Heritage, J. (2013). Teacher questioning: The epicenter of instruction and assessment. New York, NY: Routledge.
Heubert, J. P., & Hauser, R. M. (1999). High stakes: Testing for tracking, promotion and graduation. Washington, DC: National Academies Press.
Hodgson, J. R., Lazarus, S. S., Price, L., Altman, J. R., & Thurlow, M. L. (2012). Test administrators’ perspectives on the use of the read aloud accommodation on state tests for accountability (Technical Report No. 66). Minneapolis: University of Minnesota, National Center on Educational Outcomes.
Janzen, J. (2008). Teaching English language learners in the content areas. Review of Educational Research, 78, 1010–1038.
Johnstone, C. J., Altman, J. R., & Thurlow, M. L. (2006). A state guide to the development of universally designed assessments. Minneapolis: University of Minnesota, National Center on Educational Outcomes.
Johnstone, C. J., Altman, J., Thurlow, M. L., & Thompson, S. J. (2006). A summary of research on the effects of test accommodations: 2002 through 2004 (Technical Report No. 45). Minneapolis: University of Minnesota, National Center on Educational Outcomes.
Johnstone, C., Liu, K., Altman, J., & Thurlow, M. (2007). Student think aloud reflections on comprehensible and readable assessment items: Perspectives on what does and does not make an item readable (Technical Report No. 48). Minneapolis: University of Minnesota, National Center on Educational Outcomes.
Johnstone, C. J., Thompson, S. J., Miller, N. A., & Thurlow, M. L. (2008). Universal design and multi-method approaches to item review. Educational Measurement: Issues and Practice, 27(1), 25–36.
Kamei-Hannan, C. (2008). Examining the accessibility of a computerized adapted test using assistive technology. Journal of Visual Impairment & Blindness, 102, 261–271.
Kato, K., Moen, R., & Thurlow, M. (2009). Differentials of a state reading assessment: Item functioning, distractor functioning, and omission frequency for disability categories. Educational Measurement: Issues and Practice, 28(2), 28–40.
Kearns, J. F., Towles-Reeves, E., Kleinert, H. L., Kleinert, J. O., & Kleine-Kracht, M. (2011). Characteristics of and implications for students participating in alternate assessments based on alternate academic achievement standards. Journal of Special Education, 45(1), 3–14. doi:10.1177/0022466909344223
Ketterlin-Geller, L. R. (2005). Knowing what all students know: Procedures for developing universal design for assessment. Journal of Technology, Learning, and Assessment, 4(2), 1–23.
Ketterlin-Geller, L. R. (2008). Testing students with special needs: A model for understanding the interaction between assessment and student characteristics in a universally designed environment. Educational Measurement: Issues and Practice, 27(3), 3–16.
Ketterlin-Geller, L. R., Alonzo, J., Braun-Monegan, J., & Tindal, G. (2007). Recommendations for accommodations: Implications of (in)consistency. Remedial and Special Education, 28, 194–206.
Kettler, R. J. (2012). Testing accommodations: Theory and research to inform practice. International Journal of Disability, Development and Education, 59(1), 53–66.
Kieffer, M. J., Lesaux, N. K., Rivera, M., & Francis, D. J. (2009). Accommodations for English language learners taking large-scale assessments: A meta-analysis on effectiveness and validity. Review of Educational Research, 79, 1168–1201.
Kim, D. H., Schneider, C., & Siskind, T. (2009). Examining the underlying factor structure of a statewide science test under oral and standard administrations. Journal of Psychoeducational Assessment, 27, 323–333.
Kingston, N. M. (2009). Comparability of computer- and paper-administered multiple-choice tests for K-12 populations: A synthesis. Applied Measurement in Education, 22(1), 22–27.
Kitmitto, S., & Bandeira de Mello, V. (2008). Measuring the status and change of NAEP state inclusion rates for students with disabilities (NCES 2009-453). Washington, DC: National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education.
Koenig, J. A., & Bachman, L. F. (Eds.). (2004). Keeping score for all: The effects of inclusion and accommodation policies on large-scale educational assessments. Washington, DC: National Academies Press.
Kopriva, R. J. (1999). Making state tests inclusive for special populations: Training guidelines for developing and implementing inclusive Title I assessments. Washington, DC: Council of Chief State School Officers.
Kopriva, R. J. (2000). Ensuring accuracy in testing for English language learners: A practical guide for assessment development. Washington, DC: Council of Chief State School Officers.
Kopriva, R. J. (2008a). Access-based item development. In Improving testing for English language learners (pp. 279–318). New York, NY: Routledge.
Kopriva, R. J. (2008b). Changing demographics in a testing culture: Why this issue matters. In Improving testing for English language learners: A comprehensive approach to designing, building, implementing, and interpreting better academic assessments (pp. 13–36). New York, NY: Routledge.
Kopriva, R. J. (2008c). Improving testing for English language learners: A comprehensive approach to designing, building, implementing, and interpreting better academic assessments. New York, NY: Routledge.
Kopriva, R. J. (2008d). Providing the foundation of principled test construction: Maintaining the integrity of the item targets. In Improving testing for English language learners (pp. 279–318). New York, NY: Routledge.
Kopriva, R. J. (2014). Technology-interactive classroom-embedded modules for measuring challenging math and science skills of ELs [Research project]. Madison: University of Wisconsin.
Kopriva, R. J., Emick, J. E., Hipolito-Delgado, C. P., & Cameron, C. A. (2007). Do proper accommodation assignments make a difference? Examining the impact of improving decision making on scores for English language learners. Educational Measurement: Issues and Practice, 26(3), 11–20.
Kopriva, R. J., & Koran, J. (2008). Proper assignment of accommodations to individual students. In Improving testing for English language learners: A comprehensive approach to designing, building, implementing, and interpreting better academic assessments (pp. 221–258). New York, NY: Routledge.
Kopriva, R. J., & Lara, J. (2009). Looking back and looking forward: Inclusion of all students in NAEP, U.S.’s National Assessment of Educational Progress (Commissioned paper). Washington, DC: National Assessment Governing Board.
Kopriva, R. J., Triscari, R., & Carr, T. G. (2014). Exploring a technology-based, multi-semiotic methodology for measuring challenging content knowledge and skills of low English proficient students. Manuscript submitted for publication.
Koran, J., Kopriva, R. J., Emick, J., Monroe, J. R., & Garavaglia, D. (2006, April). Teacher and multi-source computerized approaches for making individualized test accommodation decisions for English language learners. Paper presented at the annual meeting of the National Council on Measurement in Education, San Francisco, CA.
Koretz, D. M., & Hamilton, L. (2000). Assessing students with disabilities in Kentucky: Inclusion, student performance, and validity. Educational Evaluation and Policy Analysis, 22, 255–272.
Laing, J., & Farmer, M. (1984). Use of the ACT Assessment by examinees with disabilities (Research Report No. 84). Iowa City, IA: American College Testing Program.
Laitusis, C. C. (2007). Research designs and analysis for studying accommodations on assessments. In C. C. Laitusis & L. L. Cook (Eds.), Large-scale assessment and accommodations: What works? (pp. 67–79). Arlington, VA: Council for Exceptional Children.
Laitusis, C. C. (2010). Examining the impact of audio presentation on tests of reading comprehension. Applied Measurement in Education, 23, 153–167.
Lara, J., & August, D. (1996). Systemic reform and limited English proficient students. Washington, DC: Council of Chief State School Officers.
Lau v. Nichols, 414 U.S. 563 (1974).
Lazarus, S. S., Thurlow, M. L., Lail, K. E., & Christensen, L. (2009). A longitudinal analysis of state accommodations policies: Twelve years of change, 1993-2005. Journal of Special Education, 43(2), 67–80.
Lee, O., Maerten-Rivera, J., Penfield, R. D., LeRoy, K., & Secada, W. G. (2008). Science achievement of English language learners in urban elementary schools: Results of a first-year professional development intervention. Journal of Research in Science Teaching, 45(1), 31–52.
Lee, O., Quinn, H., & Valdés, G. (2013). Science and language for English language learners in relation to Next Generation Science Standards and with implications for Common Core State Standards for English language arts and mathematics. Educational Researcher, 42, 223–233.
Liu, K. K., Goldstone, L. S., Thurlow, M. L., Ward, J. M., Hatten, J., & Christensen, L. L. (2013). Voices from the field: Making state assessment decisions for English language learners with disabilities. Minneapolis: University of Minnesota, Improving the Validity of Assessment Results for English Language Learners with Disabilities (IVARED).
Madaus, G., Russell, M., & Higgins, J. (2009). The paradoxes of high stakes testing: How they affect students, their parents, teachers, principals, schools, and society. Charlotte, NC: Information Age.
Mazzeo, J., Carlson, J. E., Voelkl, K. E., & Lutkus, A. D. (2000). Increasing the participation of special needs students in NAEP (NCES 2000-473). Washington, DC: U.S. Department of Education, Office of Educational Research and Improvement.
McDonnell, L. M., McLaughlin, M. J., & Morison, P. (Eds.). (1997). Educating one & all: Students with disabilities and standards-based reform. Washington, DC: National Academies Press.
McGrew, K. S., Algozzine, B., Ysseldyke, J. E., Thurlow, M. L., & Spiegel, A. N. (1995). The identification of individuals with disabilities in national databases: Creating a failure to communicate. Journal of Special Education, 28, 472–487.
McGrew, K. S., Thurlow, M. L., & Spiegel, A. N. (1993). An investigation of the exclusion of students with disabilities in national data collection programs. Educational Evaluation and Policy Analysis, 15, 339–352.
McKevitt, B., Marquart, A., Mroch, A., Schulte, A. G., Elliott, S. N., & Kratochwill, T. R. (2000, March). Understanding the effects of testing accommodations: A single case approach. Paper presented at the annual meeting of the Council of Chief State School Officers, Snowbird, UT.
Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 13–103). Washington, DC: American Council on Education and National Council on Measurement in Education.
Meyen, E., Poggio, J., Seok, S., & Smith, S. (2006). Equity for students with high-incidence disabilities in statewide assessments: A technology-based solution. Focus on Exceptional Children, 38(7), 1–8.
Mislevy, R. J. (1994). Evidence and inference in educational assessment. Psychometrika, 59, 439–483.
Mislevy, R. J., Steinberg, L. S., & Almond, R. G. (2003). On the structure of educational assessments. Measurement: Interdisciplinary Research and Perspectives, 1(1), 3–62.
National Assessment Governing Board. (2005). NAEP mathematics assessment and item specifications. Washington, DC: Author.
National Center for Education Statistics. (2013). A first look: 2013 mathematics and reading (NCES 2014-451). Washington, DC: U.S. Department of Education, Institute of Education Sciences. Retrieved from http://nationsreportcard.gov/reading_math_2013
National Center on Educational Outcomes. (2011a). Don’t forget accommodations! Five questions to ask when moving to technology-based assessments (NCEO Brief No. 1). Minneapolis: University of Minnesota, National Center on Educational Outcomes.
National Center on Educational Outcomes. (2011b). Understanding subgroups in common state assessments: Special education students and ELLs (NCEO Brief No. 4). Minneapolis: University of Minnesota, National Center on Educational Outcomes.
National Council on Measurement in Education. (2012). Testing and data integrity in the administration of statewide student assessment programs. Washington, DC: Author.
National Governors’ Association. (1986). Time for results: The governors’ 1991 report on education. Washington, DC: Author.
Office for Civil Rights. (2000). The use of tests as part of high-stakes decision-making for students: A resource guide for educators and policy makers. Washington, DC: U.S. Department of Education.
Oliveri, M. E., Ercikan, K., & Zumbo, B. D. (2014). Effects of population heterogeneity on accuracy of DIF detection. Applied Measurement in Education, 27, 286–300.
Pennock-Roman, M., & Rivera, C. (2011). Mean effects of test accommodations for ELLs and non-ELLs: A meta-analysis of experimental studies. Educational Measurement: Issues and Practice, 30, 10–18.
Phillips, S. E. (1994). High-stakes testing accommodations: Validity versus disabled rights. Applied Measurement in Education, 7, 93–120.
Pitoniak, M. J., & Royer, J. M. (2001). Testing accommodations for examinees with disabilities: A review of psychometric, legal, and social policy issues. Review of Educational Research, 71, 53–104.
Popham, W. J. (1994). The instructional consequences of criterion-referenced clarity. Educational Measurement: Issues and Practice, 13(4), 15–18.
Powers, S., & Strain-Seymour, E. (2013, April). Integrating research paradigms to provide validity evidence for next-generation English language learner assessments. Presentation at the American Educational Research Association annual meeting, San Francisco, CA.
Quenemoen, R., Thurlow, M., Moen, R., Thompson, S., & Morse, A. B. (2003). Progress monitoring in an inclusive standards-based assessment and accountability system (Synthesis Report No. 53). Minneapolis: University of Minnesota, National Center on Educational Outcomes.
Rhode Island Department of Education. (2003). Rhode Island assessment accommodation study: Research summary. Minneapolis: University of Minnesota, National Center on Educational Outcomes.
Rieke, R., Lazarus, S. S., Thurlow, M. L., & Dominguez, L. M. (2013). 2012 survey of states: Successes and challenges during a time of change. Minneapolis: University of Minnesota, National Center on Educational Outcomes.
Rigney, S., & Pettit, M. (1995, April). Criteria for producing equivalent scores on portfolio assessments: Vermont’s approach. Paper presented at the annual meeting of the American Educational Research Association, San Francisco, CA.
Rigney, S., Wiley, D. E., & Kopriva, R. J. (2008). The past as preparation: Measurement, public policy, and implications for access. In R. J. Kopriva (Ed.), Improving testing for English language learners: A comprehensive approach to designing, building, implementing, and interpreting better academic assessments (pp. 37–64). New York, NY: Routledge.
Rivera, C., & Collum, E. (2004). An analysis of state assessment policies addressing the accommodation of English language learners (Issue paper). Arlington, VA: National Assessment Governing Board, Center for Equity and Excellence in Education, The George Washington University.
Rivera, C., & Collum, E. (Eds.). (2006). A national review of state assessment policy and practice for English language learners. Mahwah, NJ: Erlbaum.
Rivera, C., Collum, E., Willner, L. S., & Sia, J. K., Jr. (2006). An analysis of state assessment policies addressing the accommodation of English language learners. In C. Rivera & E. Collum (Eds.), A national review of state assessment policy and practice for English language learners (pp. 1–173). Mahwah, NJ: Erlbaum.
Rivera, C., & Stansfield, C. (2000). An analysis of state policies for the inclusion and accommodation of English language learners in state assessment programs during 1998–1999 (Executive summary). Washington, DC: Center for Equity and Excellence in Education, The George Washington University.
Rogers, C. M., Christian, E. M., & Thurlow, M. L. (2012). A summary of the research on the effects of test accommodations: 2009-2010 (Technical Report No. 65). Minneapolis: University of Minnesota, National Center on Educational Outcomes.
Rose, D., & Meyer, A. (2002). Teaching every student in the digital age: Universal design for learning. Alexandria, VA: ASCD.
Roth, W.-M., Oliveri, M. E., Sandilands, D., Lyons-Thomas, J., & Ercikan, K. (2013). Investigating sources of differential item functioning using expert think-aloud protocols. International Journal of Science Education, 35, 546–576.
Russell, M. (2011). Digital test delivery: Empowering accessible test design to increase test validity for all students. Washington, DC: Arabella Advisors.
Russell, M., Hoffman, R., & Higgins, J. (2009a). Meeting the needs of all students: A universal design approach to computer-based testing. Innovate: Journal of Online Education, 5(4).
Russell, M., Hoffman, R., & Higgins, J. (2009b). NimbleTools: A universally designed test delivery system. Teaching Exceptional Children, 42(2), 6–12.
Schleppegrell, M. J. (2004). The language of schooling: A functional linguistics perspective. Mahwah, NJ: Erlbaum.
Shepard, L. A. (2001). The role of classroom assessment in teaching and learning. In V. Richardson (Ed.), Handbook of research on teaching (4th ed., pp. 1066–1101). Washington, DC: American Educational Research Association.
Shriner, J. G., & DeStefano, L. (2003). Participation and accommodation in state assessment: The role of Individualized Education Programs. Exceptional Children, 69, 147–161.
Shriner, J. G., & Thurlow, M. L. (1993). State special education outcomes 1992. Minneapolis: University of Minnesota, National Center on Educational Outcomes.
Sireci, S. G., Li, S., & Scarpati, S. (2003). The effects of test accommodations on test performance: A review of the literature (Center for Educational Assessment Research Report No. 485). Amherst: School of Education, University of Massachusetts.
Sireci, S. G., Scarpati, S. E., & Li, S. (2005). Test accommodations for students with disabilities: An analysis of the interaction hypothesis. Review of Educational Research, 75, 457–490.
Sireci, S. G., & Wells, C. (2010). Evaluating the comparability of English and Spanish video accommodations for English language learners. In P. C. Winter (Ed.), Evaluating the comparability of scores from educational achievement test variations (pp. 33–68). Washington, DC: Council of Chief State School Officers.
Solano-Flores, G. (2006). Language, dialect, and register: Sociolinguistics and the estimation of measurement error in the testing of English-language learners. Teachers College Record, 108, 2354–2379.
Solano-Flores, G. (in press). Simultaneous test development. In C. R. Reynolds, R. W. Kamphaus, & C. DiStefano (Eds.), Encyclopedia of psychological and educational testing: Clinical and psychoeducational applications. New York, NY: Oxford University Press.
Solano-Flores, G. (2014). Probabilistic approaches to examining linguistic features of test items and their effect on the performance of English language learners. Applied Measurement in Education, 27, 236–247.
Solano-Flores, G., & Gustafson, M. (2012). Assessment of English language learners: A critical, probabilistic, systemic view. In M. Simon, K. Ercikan, & M. Rousseau (Eds.), Improving large-scale assessment in education: Theory, issues, and practice (pp. 87–109). New York, NY: Routledge.
Solano-Flores, G., & Li, M. (2009). Language variation and score variation in the testing of English language learners, native Spanish speakers. Educational Assessment, 14, 1–15.
Thompson, S., Blount, A., & Thurlow, M. (2002). A summary of research on the effects of test accommodations: 1999 through 2001 (Technical Report No. 34). Minneapolis: University of Minnesota, National Center on Educational Outcomes.
Thompson, S. J., Quenemoen, R., & Thurlow, M. L. (2006). Factors to consider in the design of inclusive online assessments. In M. Hricko (Ed.), Online assessment and measurement: Foundations and challenges (pp. 102–117). Hershey, PA: Information Science.
Thompson, S. J., Thurlow, M. L., & Malouf, D. (2004). Creating better tests for everyone through universally designed assessments. Journal of Applied Testing Technology, 10(2). Retrieved from http://www.jattjournal.com/index.php/atp/article/view/48341
Thurlow, M. L. (2012). Students with disabilities, testing accommodations. In J. A. Banks (Ed.), Encyclopedia of diversity in education (pp. 2090–2092). Thousand Oaks, CA: Sage.
Thurlow, M. L., House, A., Boys, C., Scott, D., & Ysseldyke, J. (2000). State participation and accommodations policies for students with disabilities: 1999 update (Synthesis Report No. 33). Minneapolis: University of Minnesota, National Center on Educational Outcomes.
ThurlowM. L.JohnstoneC.Ketterlin-GellerL. (2008). Universal design of assessment. In BurgstahlerS.CoryR. (Eds.), Universal design in post-secondary education: From principles to practice (pp. 73–81). Cambridge, MA: Harvard Education Press.
160.
ThurlowM. L.JohnstoneC.ThompsonS.CaseB. (2008). Using universal design research and perspectives to increase the validity of scores on large-scale assessments. In JohnsonR. C.MitchellR. E. (Eds.), Testing deaf students in an age of accountability (pp. 63–75). Washington, DC: Gallaudet University Press.
161.
ThurlowM. L.LaitusisC. C.DillonD. R.CookL. L.MoenR. E.AbediJ.O’BrienD. G. (2009). Accessibility principles for reading assessments. Minneapolis, MN: National Accessible Reading Assessment Projects.
162.
ThurlowM. L.LazarusS. (2009, April). Accommodations for all testing: Curriculum based, formative, district, and state (Presession Workshop No. 17). Council for Exceptional Children, Seattle, WA.
163.
ThurlowM. L.LazarusS. S.ChristensenL. L. (2008). Role of assessment accommodations in accountability. Perspectives, 45(4), 17–20.
Thurlow, M., Lazarus, S. S., Albus, D., & Hodgson, J. (2010). Computer-based testing: Practices and considerations (Synthesis Report No. 78). Minneapolis: University of Minnesota, National Center on Educational Outcomes.
Thurlow, M. L., Lazarus, S. S., & Christensen, L. L. (2013). Accommodations and modifications for assessment. In B. Cook & M. Tankersley (Eds.), Effective practices in special education (pp. 311–327). Iowa City, IA: Pearson.
Thurlow, M. L., Lazarus, S., Thompson, S., & Robey, J. (2002). 2001 state policies on assessment participation and accommodations (Synthesis Report No. 46). Minneapolis: University of Minnesota, National Center on Educational Outcomes.
Thurlow, M. L., Liu, K., Ward, J., & Christensen, L. (2013). Assessment principles and guidelines for ELLs with disabilities. Minneapolis: University of Minnesota, Improving the Validity of Assessment Results for English Language Learners with Disabilities (IVARED).
Thurlow, M. L., Moen, R. E., Lekwa, A. J., & Scullin, S. B. (2010). Examination of a reading pen as a partial auditory accommodation for reading assessment. Minneapolis: University of Minnesota, Partnership for Accessible Reading Assessment.
Thurlow, M. L., Quenemoen, R. F., & Lazarus, S. S. (2011). Meeting the needs of special education students: Recommendations for the Race to the Top consortia and states. Washington, DC: Arabella Advisors.
Thurlow, M. L., Quenemoen, R. F., Lazarus, S. S., Moen, R. E., Johnstone, C. J., Liu, K. K., . . . Altman, J. (2008). A principled approach to accountability assessments for students with disabilities (Synthesis Report No. 70). Minneapolis: University of Minnesota, National Center on Educational Outcomes.
Thurlow, M. L., Scott, D., & Ysseldyke, J. E. (1995). A compilation of states’ guidelines for accommodations in assessment for students with disabilities (Synthesis Report No. 18). Minneapolis: University of Minnesota, National Center on Educational Outcomes.
Thurlow, M. L., Seyfarth, A. L., Scott, D. L., & Ysseldyke, J. E. (1997). State assessment policies on participation and accommodations for students with disabilities: 1997 update (Synthesis Report No. 29). Minneapolis: University of Minnesota, National Center on Educational Outcomes.
Thurlow, M. L., Ysseldyke, J. E., & Silverstein, B. (1993). Testing accommodations for students with disabilities: A review of the literature (Synthesis Report No. 4). Minneapolis: University of Minnesota, National Center on Educational Outcomes.
Thurlow, M. L., Ysseldyke, J. E., & Silverstein, B. (1995). Testing accommodations for students with disabilities. Remedial and Special Education, 16, 260–270.
Tindal, G., & Fuchs, L. (2000). A summary of research on test changes: An empirical basis for defining accommodations. Lexington, KY: Mid-South Regional Resource Center. (ERIC Document Reproduction Service No. ED 442245)
Towles-Reeves, E., Kearns, J., Kleinert, H., & Kleinert, J. (2009). An analysis of the learning characteristics of students taking alternate assessments based on alternate achievement standards. Journal of Special Education, 42, 241–254.
U.S. Government Accountability Office. (2011). Higher education and disability: Improved federal enforcement needed to better protect students’ rights to testing accommodations (GAO-12-40). Washington, DC: Author.
Vang, M., & Thurlow, M. (2013). 2010-2011 APR snapshot #4: State assessment participation and performance of students receiving special education services. Minneapolis: University of Minnesota, National Center on Educational Outcomes.
Vaughn, S., Martinez, L. R., Linan-Thompson, S., Reutebuch, C. K., Carlson, C. D., & Francis, D. J. (2009). Enhancing social studies vocabulary and comprehension for seventh-grade English language learners: Findings from two experimental studies. Journal of Research on Educational Effectiveness, 2, 297–324.
Wiley, D. E., & Haertel, E. (1995). Extended assessment tasks: Purposes, definition, scoring and accuracy. In R. Mitchell (Ed.), Implementing performance assessment: Promises, problems and challenges (pp. 61–90). Hillsdale, NJ: Lawrence Erlbaum.
Willner, L. S. (in press). CSAI Brief: Increasing student access to the language of the more cognitively challenging college and career-ready standards. San Francisco, CA: Center on Standards and Assessment Implementation.
Wright, L. J., & Logan-Terry, A. (2014). Multimodality and measurement: A discourse analysis of English learners’ interactions with traditional and multisemiotic test tasks. Manuscript submitted for publication.
Wright, L. J., Staehr-Fenner, D., Moxley, K., Kopriva, R. J., & Carr, T. G. (2013). Exploring how diverse learners interact with computerized, multi-semiotic representations of meaning: Highlights from cognitive labs conducted with ONPAR end-of-course biology and chemistry assessment tasks. Retrieved from http://www.iiassessment.wceruw.org
Zenisky, A. L., & Sireci, S. G. (2007). A summary of the research on the effects of test accommodations: 2005-2006 (Technical Report No. 47). Minneapolis: University of Minnesota, National Center on Educational Outcomes.