Summary: | It is well known that a welded joint is much “weaker” than the base metal because of discontinuities in geometry, material properties, and residual stresses. Current international design rules do not adopt a uniform approach to weld efficiency, often defined as the ratio of the strength of a welded joint to that of the base metal, in their guidance for the creep and fatigue design of welds. This is a major barrier to the application of welded structures in nuclear power plants, which have a prolonged design lifetime of 60 years. In this work, the fatigue strength reduction factor of a Cr-Ni-Mo-V steel welded joint, machined from welded steam turbine rotors for nuclear power plants, was investigated by performing axial push-pull cyclic loading tests on both cross-weld and pure base-metal specimens up to the very high cycle fatigue regime at ultrasonic frequency and ambient temperature. The effects of residual stress, strain localization, and micro-defects in the mismatched steels on the failure mechanisms of the welds are discussed in detail. Results show that the fatigue strength reduction factor varies in the range of 0.95-0.975 and, for the first time, is found to depend on fatigue lifetime. This variation is associated with the transition of crack initiation from the specimen surface in the high cycle fatigue regime to interior micro-defects in the very high cycle fatigue regime. Comparison of existing codes and standards for the fatigue design of welds with the experimental data indicates the over-conservatism of the present code-based design method, implying that a micro-defect-based fatigue design approach is required for the long-life safety and reliability of weldments.
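For reference, the weld efficiency (fatigue strength reduction factor) mentioned above can be expressed as the ratio stated in the text; this is a minimal sketch with assumed notation, not necessarily the paper's own symbols:

$$K_f = \frac{\sigma_{a,\mathrm{weld}}}{\sigma_{a,\mathrm{base}}}$$

where $\sigma_{a,\mathrm{weld}}$ and $\sigma_{a,\mathrm{base}}$ denote the fatigue strengths (stress amplitudes at a given lifetime) of the cross-weld and base-metal specimens, respectively. With the reported range of 0.95-0.975, the fatigue strength of the welded joint at a given lifetime is 95-97.5% of that of the base metal.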