This study provides support for the utility of the SenseCam to capture contextual features of the built environment that individuals may encounter during walking or cycling journeys. Although the current dataset was derived from a limited number of participants and a limited geographic area, all hypothesised features of importance drawn from the audit tools were identifiable in the captured images. The tendency to under-report journey duration contrasts with previous SenseCam research [22, 32] and Global Positioning System (GPS) studies, possibly due to the small convenience sample and the focus on work-related walking and cycling trips only. We found significant differences in the presence of specific features between walking and cycling modes, suggesting preliminary support for the content validity of this approach. For example, a significantly greater proportion of footpaths, pedestrians, and pedestrian crossings was found for walking trips, while a higher prevalence of cars was found for cycling journeys. With the exception of cycle lanes, all significant differences between features identified for walking and cycling were in the expected direction. The scarcity of cycle lanes in the study areas may partly explain this finding, as many cycling journeys were completed on roads without cycle lanes. Improving on existing audits that do not reflect temporal exposure, the SenseCam data enabled the capture of factors that individuals actually encountered during active transport journeys, such as traffic density; weather conditions; presence of pedestrians, cyclists, and dogs; and temporary obstructions to walking or cycling.
Almost a quarter of the data were lost because images were too dark to enable coding of features. In part, this is likely due to the study being conducted during winter, with reduced daylight hours and work-related travel occurring in low light. In some instances, participants may have intentionally or unintentionally worn the SenseCam with the lens facing inwards, or worn an item of clothing over the SenseCam, either of which would also produce uncodeable images. Researchers may benefit from asking participants whether there were instances where the SenseCam lens was intentionally obscured, to account for this. A high proportion of work-related journeys were omitted because motorised transport modes were used. Again, this may reflect the study being conducted during winter, with weather conditions discouraging active transport. It is also likely that some participants resided in environments that were unsupportive of active transport to their workplaces; however, we cannot establish this from the current investigation.
While the use of photography in health research is not a new concept, the use of wearable cameras to passively capture a series of images over specified periods has only become possible in recent years. As we have observed with other emerging technologies in health behaviour research, such as pedometry, accelerometry, and GPS, substantial research is first required to develop appropriate and agreed-upon data treatment methods. As noted earlier, this study was conducted with a small sample and was limited to two areas of Auckland, New Zealand. Our aim was not to provide a comprehensive framework for coding environmental features, but to provide proof of concept and baseline data for future active transport work across international sites. Research is now needed to determine the criterion and predictive validity of SenseCam image coding of environmental features across a range of settings and situations (e.g., heavy traffic), utilising the wide range of validated environmental audits available. SenseCam images can provide repeated measures of environmental variables encountered during journeys, which may differ by individual, and by journey duration, purpose, and mode. As such, it is possible that some journeys or individuals may bias findings (e.g., by contributing a greater number of repeated measures of one factor). Future research should therefore consider accounting for clustering of environmental features both within journeys and within individuals when investigating differences between environmental features encountered. Manual coding of the data was time consuming, taking approximately 25 researcher hours to process the 2292 images (equivalent to approximately 6.4 hours of journey time). Consequently, automated concept detection techniques need to be extended to identify environmental features of interest in future research with larger sample sizes.
The wide range of kappa values found for inter-rater reliability across factors may denote issues with researcher interpretation (e.g., judging whether features were in ‘good condition’) or difficulties in clearly identifying features (e.g., due to blurry photos). Future work to establish clear coding instructions and training protocols for researchers is thus required. Further research is also needed to consider more detailed built environment features than those presented here, for example types of pedestrian crossings, which may be especially important for vulnerable populations. Walking and cycling were the only travel modes examined in the current study; future research should consider the implications of differing travel modes and travel behaviours for image quality (e.g., running may result in blurry, uncodeable images), across a wider range of journey purposes and demographic groups.