Abstract
Conventional measurements of the degree of temper embrittlement suffered by a steel are made in terms of the shift of the notched-bar "fracture appearance" transition temperature (FATT). Any fundamental explanation of temper embrittlement must be concerned with the lowering of grain-boundary cohesion by the segregation of impurity elements to grain boundaries over a critical temperature range, typically 560–430 °C. The use of transition shifts to estimate the kinetics of the reduction of grain-boundary cohesion can be criticized on several counts. First, it is often the case that the unembrittled condition breaks by cleavage rather than by intergranular fracture at low temperatures, so that no measure of the unembrittled grain-boundary cohesion can be obtained. Secondly, even if both fractures are intergranular, the shift compares fracture events at different temperatures, and there is no guarantee that the relative grain-boundary cohesive strengths do not change with temperature. Thirdly, the temper-brittle fracture processes at low temperatures depend critically not only on grain-boundary properties but also on the size and distribution of tempered carbides and on the yield strength of the matrix. Finally, though it may be possible to relate the "nil-ductility" transition temperature to the magnitude of the critical stress needed to propagate a crack nucleus along embrittled grain boundaries in the yielded region ahead of a notch, the more commonly used FATT embraces, by definition, both intergranular and ductile regions of fracture. If impurity-element segregation had separate effects on these two modes of fracture, e.g. by weakening carbide/ferrite matrix interface bonding in addition to weakening grain boundaries, it would prove almost impossible to extract any worthwhile quantitative information from the FATT.
