In the long-term operation of highway infrastructure, timely monitoring and performance verification of maintenance tasks are essential. Examples of such tasks include mowing the landscaped areas along highways, detecting debris, patching potholes in the pavement, and identifying and repairing damaged road signs. This study develops a framework that integrates Unmanned Aerial Vehicle (UAV) image acquisition with image segmentation methods to automate the monitoring tasks needed for effective highway infrastructure maintenance. Existing research on UAV-based environmental monitoring is limited by the scarcity of datasets relevant to highway monitoring and has not comprehensively analyzed the effect of flight parameters. To overcome these limitations, the proposed research investigates the effect of flight parameters on UAV semantic segmentation performance by considering images taken at varying flight heights and with both vertical and oblique camera angles. A deep neural network based on U-Net automatically processes the images and segments them into different regions. Training data are annotated efficiently at large scale through automatic co-labeling of images and point cloud data. Validation experiments on a real highway dataset show that segmentation performance varies by 3-25% with flight height, but by only 0.5% with camera angle.
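The abstract names a U-Net-based network for per-pixel segmentation of UAV imagery but gives no architectural details. As an illustration only, the following is a minimal sketch of the general U-Net pattern (an encoder-decoder with skip connections) in PyTorch; the channel widths, depth, and the four-class label set are assumptions made here for demonstration, not the paper's actual configuration.

```python
# Illustrative sketch of a U-Net-style segmentation network.
# All sizes and the class set below are assumptions, not the paper's.
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 conv + BatchNorm + ReLU layers: the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    def __init__(self, n_classes=4):
        # n_classes is hypothetical (e.g. pavement, vegetation, sign, other).
        super().__init__()
        self.enc1 = double_conv(3, 32)
        self.enc2 = double_conv(32, 64)
        self.bottleneck = double_conv(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = double_conv(128, 64)   # 64 skip + 64 upsampled channels
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = double_conv(64, 32)    # 32 skip + 32 upsampled channels
        self.head = nn.Conv2d(32, n_classes, 1)  # per-pixel class logits

    def forward(self, x):
        e1 = self.enc1(x)                   # full resolution
        e2 = self.enc2(self.pool(e1))       # 1/2 resolution
        b = self.bottleneck(self.pool(e2))  # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                # (N, n_classes, H, W)

# Usage: segment a batch of UAV frames into a per-pixel class map.
model = MiniUNet(n_classes=4)
logits = model(torch.randn(1, 3, 256, 256))
labels = logits.argmax(dim=1)  # (1, 256, 256) predicted class indices
```

The skip connections (the `torch.cat` calls) are what distinguish U-Net from a plain encoder-decoder: they pass high-resolution encoder features directly to the decoder, which helps recover sharp region boundaries in the segmented output.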