Slashdot has an article about Nissan and Microsoft trying to integrate their next technologies. Nissan's new concept car would come equipped with an Xbox 360: when the car is parked, you can just flip up the LCD monitor and play Project Gotham Racing 3 using the real steering wheel, accelerator, etc. Hmm... one question: why use the LCD instead of trying to project something onto the windshield? I think it would be a really good experience for the gamer if there were some provision for projecting the video output onto the front glass.
"http://www.theregister.co.uk/2005/12/28/microsoft_nissan_urge/"
I have been looking at the review I got from NCC and was wondering: does Blender really have such good API support that simple Python scripts could create depth maps etc.? So I set out to clarify that and to find out whether any work has been done so far on using Blender in computer vision. To my surprise, I found some existing work on camera calibration for Blender. The API was another shock to me: it provides support for making OpenGL calls directly, for creating widgets (buttons, dropdown boxes, etc.), and for accessing the scene details as well. I included this in the JGT paper and informed my prof about it. Have to see how it goes...
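Just to make the depth-map idea concrete, here is a minimal sketch of the math such a script would implement. The helper names (`view_depth` etc.) are mine, not Blender's API; inside Blender you would instead read the camera's position and orientation from the scene via the Python API and evaluate this per pixel.

```python
def sub(a, b):
    """Component-wise vector difference."""
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    """Dot product of two vectors."""
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    """Vector scaled to unit length."""
    length = dot(a, a) ** 0.5
    return tuple(x / length for x in a)

def view_depth(cam_pos, cam_target, point):
    """Depth of `point` along the camera's viewing axis -- the value a
    depth map stores per pixel (distance along the optical axis, not
    the Euclidean distance to the camera center)."""
    forward = norm(sub(cam_target, cam_pos))
    return dot(sub(point, cam_pos), forward)

# A point straight ahead and one off to the side at the same axial
# distance get the same depth value:
print(view_depth((0, 0, 0), (0, 0, -1), (0, 0, -5)))  # 5.0
print(view_depth((0, 0, 0), (0, 0, -1), (3, 4, -5)))  # 5.0
```

Note the design choice: depth along the optical axis (what a Z-buffer holds), not range to the camera, which is why the off-axis point above still reads 5.0.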