Range Finder App
An app that accurately measures distance from your smartphone/tablet. Something like this already exists (Smartmeasure), but it's clunky, troublesome, and obscure. I'm aiming for this app to be used naturally by anybody who constantly needs to measure.
Goals:
Create a work-around for the focused laser beam that is traditionally used in range finders. A phone can't provide one, and without some substitute you would be unable to get any meaningful reading. Perhaps an algorithm that compares the scale of items it finds in the camera image against a stored set and gives approximations.
Use the phone’s sound capabilities to measure distance
Use the phone’s map/compass capabilities to measure distance
Use the phone's accelerometer to track the phone's movement, and take a series of images with the camera as you point the phone towards a target, to measure distance.
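One hedged sketch of that last camera + accelerometer idea (all names and numbers here are illustrative, not a working design): if double-integrating the accelerometer gives the sideways baseline between two shots, and feature matching gives the target's pixel shift between the frames, the distance follows from simple triangulation.

```java
// Hypothetical sketch: estimate distance by parallax between two camera
// frames taken from positions separated by a known baseline.
public final class ParallaxRange {
    /**
     * @param baselineMeters   sideways displacement between the two shots,
     *                         e.g. double-integrated from the accelerometer
     * @param pixelShift       horizontal shift of the target between frames
     * @param imageWidthPx     camera image width in pixels
     * @param horizontalFovDeg camera horizontal field of view in degrees
     * @return estimated distance to the target in meters
     */
    static double estimateDistance(double baselineMeters, double pixelShift,
                                   int imageWidthPx, double horizontalFovDeg) {
        double radiansPerPixel = Math.toRadians(horizontalFovDeg) / imageWidthPx;
        double parallaxAngle = pixelShift * radiansPerPixel;
        // Small-angle triangulation: distance ≈ baseline / tan(parallax).
        return baselineMeters / Math.tan(parallaxAngle);
    }

    public static void main(String[] args) {
        // Example: a 10 cm sideways move, target shifts 40 px in a 4000 px
        // frame with a 67° horizontal FOV -> roughly 8.5 m away.
        System.out.printf("%.2f m%n", estimateDistance(0.10, 40, 4000, 67.0));
    }
}
```

The catch is the baseline: phone accelerometers drift badly when double-integrated, so in practice the displacement estimate, not the pixel matching, would dominate the error.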
Hey Zabrak999, this is really a wonderful idea, but I haven't seen such an app to date. You could contact some Android developers to get such an app developed for you.
Hello everyone,
I'm currently writing my master's thesis on the topic "Context Simulator For Mobile Business Application". The goal is to test how an Android application reacts to changing context conditions: How does an application react if the battery is almost empty? If the internet connection breaks down during a data transmission? If an SD card is or isn't available? ...
I want to simulate all of these factors on the PC and send the data to my Android device. Some more examples:
- Simulating sensor data for accelerometer, gyroscope, ...
- GPS
- Camera and microphone (if an application requests a camera image, it should receive an image from my simulator)
- Fake connections for Wi-Fi, HSDPA, EDGE
- Fake time, time zone and date
- Simulate a specific battery level
- Fake calendar entries
------------------ My approaches ------------------
No 1:
Extend an existing custom ROM with my features => some calls (example: GPS) should not go to the OS but to my simulator on the PC. The simulator would also send data (example: battery level) to the Android OS, for instance to pretend the battery is low.
No 2:
Write my own sandbox application (I haven't found any information on this topic so far). Inside this sandbox application I would start the application to test, so it becomes possible to intercept every request from the system under test and decide whether to forward it to the Android OS or to my simulator.
No 3:
Develop my own library, which the system under test includes. This library wraps or extends some Android classes (e.g. Activity, LocationManager, SensorManager), and my extension classes transmit requests to my simulator instead of to the OS (see the sketch after this list).
I'm afraid I would only have limited functionality with this approach.
No 4:
Take the OpenIntents sensor simulator as a basis and extend it as far as possible.
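A minimal sketch of approach No. 3, assuming the system under test calls a facade instead of android.location.LocationManager directly (LocationManager cannot be subclassed from app code, so wrapping is the realistic variant; LocationFacade and SimulatorClient are made-up names):

```java
// The app under test talks to this facade instead of LocationManager.
import android.location.Location;
import android.location.LocationManager;

public class LocationFacade {
    private final LocationManager real;      // the OS service
    private final SimulatorClient simulator; // link to the PC simulator
    private final boolean simulate;

    public LocationFacade(LocationManager real, SimulatorClient simulator,
                          boolean simulate) {
        this.real = real;
        this.simulator = simulator;
        this.simulate = simulate;
    }

    public Location getLastKnownLocation(String provider) {
        if (simulate) {
            // Build a Location from values streamed by the PC simulator.
            Location fake = new Location(provider);
            fake.setLatitude(simulator.latitude());
            fake.setLongitude(simulator.longitude());
            fake.setTime(System.currentTimeMillis());
            return fake;
        }
        return real.getLastKnownLocation(provider);
    }
}

// Hypothetical transport; could be a plain TCP socket to the PC.
interface SimulatorClient {
    double latitude();
    double longitude();
}
```

The same pattern would have to be repeated for SensorManager, connectivity, time, and so on, which is also where this approach becomes laborious.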
-------- About Me --------
I have only a little experience in Android development, but a lot of experience in Java development. I know I should now read a lot about custom ROMs, ... Unfortunately, this thesis has to be finished at the end of March.
------- What I want from you -------
Advice. I hope you understand my problem. What is the best way to realize this project? I would like to have as much functionality as possible. My prototype doesn't need to support all context factors, but I should consider all of them in my system design.
I wanted to attach two graphics, but unfortunately I'm not allowed to. They show two possibilities, and I'm not sure which one is better (or whether they are even possible):
http://s7.directupload.net/images/131212/bnpuo8gh.png
http://s7.directupload.net/images/131212/e7u8dv4r.png
Thanks a lot,
Michael
Hello all,
I am low-vision/legally blind, and I am looking for a programmer to create an application: basically a camera application that can be controlled via the device's hardware buttons, a Bluetooth mouse, and voice commands.
Basically, this would be NuEyes running as an application on the ODG R7 Smartglasses rather than as the OS overlay (I think that's the term) that is currently used.
EDIT 20150627: I thought I would add a little more information here to try to make my stumbling rambles clear. What I am basically looking for is something with the features of Visor that can be controlled using the unused option buttons on the R7's finger mouse, and that optimally responds to basic voice commands such as "change filter" to rotate through the three visual settings, as well as "increase" and "decrease" for magnification. I really love the A Better Camera style of interface and would love to include a night-vision setting like the one it has, as this would be very useful in low-light situations to help with my night blindness. Please let me know if this is possible; I really need the help coming up with an alternative to the extremely expensive NuEyes software package. /EDIT
Being a complete novice, I would like to know what would be involved in this application: what the programmer would require from ODG (framework, sample code, etc.), and what kind of costs I would be looking at for the initial application and for maintenance.
The reason I am asking is that, though I believe companies have a right to make profits and recoup R&D costs, the NuEyes system costs about $6,000; that works out to about $2K wholesale for the glasses and $4K for the software. I'm just so frustrated by how much every piece of adaptive tech for the blind costs. Every time one of us needs a piece of tech, we have to sell our souls to the devil to get it.
If this isn't the correct forum to post this request in, please let me know of a more appropriate location. Thank you.
Updated with clarification of project goal.
Apologies if this is in the wrong section; mods, please move it if need be.
I am using a public app from the Play Store. I would like to perform operations similar to those one might do when unit-testing UI elements.
The app consists mainly of a list view of rectangular cards arranged in a vertical orientation. The app allows me to accept batches (orders) for which I perform deliveries for payment. Other users use this same app, so competition is fierce: if I do not accept a lucrative batch as fast as possible, someone else will, and even then it's possible, due to latency or simply bad luck, that I do not get the batch. It is also bad because I tend to stare at my screen a lot while driving.
I had an idea. I'm seeking a solution similar to the Robot class in Java, except that it should also be able to analyze the contents of the ListView (whose items are ViewGroups composed of TextViews).
I was able to partially emulate what I want using UIAutomator, but it is a cumbersome solution because it requires ADB to run every time. On top of that, the swipe function on UIAutomator's UiDevice object does not work on this particular app.
I have heard there are better utilities that can accomplish this. I have root on my phone.
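For illustration, the direction I keep coming back to is an AccessibilityService: it runs on-device without ADB and can both read and click views. A rough sketch, where looksLucrative() and the target app's view hierarchy are assumptions on my part:

```java
// Sketch of an AccessibilityService that walks the active window, reads
// every TextView, and clicks the card whose text matches a heuristic.
import android.accessibilityservice.AccessibilityService;
import android.view.accessibility.AccessibilityEvent;
import android.view.accessibility.AccessibilityNodeInfo;

public class BatchWatcherService extends AccessibilityService {
    @Override
    public void onAccessibilityEvent(AccessibilityEvent event) {
        AccessibilityNodeInfo root = getRootInActiveWindow();
        if (root == null) return;
        scan(root);
    }

    // Depth-first walk over the window, reading every TextView's text.
    private void scan(AccessibilityNodeInfo node) {
        if (node == null) return;
        CharSequence cls = node.getClassName();
        if (cls != null && cls.toString().endsWith("TextView")
                && node.getText() != null
                && looksLucrative(node.getText().toString())) {
            // Click the nearest clickable ancestor (the card itself).
            AccessibilityNodeInfo target = node;
            while (target != null && !target.isClickable()) {
                target = target.getParent();
            }
            if (target != null) {
                target.performAction(AccessibilityNodeInfo.ACTION_CLICK);
            }
        }
        for (int i = 0; i < node.getChildCount(); i++) {
            scan(node.getChild(i));
        }
    }

    // Hypothetical heuristic; replace with whatever marks a good batch.
    private boolean looksLucrative(String text) {
        return text.contains("$");
    }

    @Override
    public void onInterrupt() { }
}
```

The service would still need an accessibility-service declaration in the manifest and has to be enabled manually under Settings > Accessibility, but it needs no root and no ADB.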
Any advice?
I'd like to develop an application for a smartwatch which will periodically turn sensors on the watch/band on and off (microphone, camera, etc.), record their activity, do minimal processing, and store the result on the watch/band. This result will be uploaded to the mobile phone when a Bluetooth connection is established.
To the best of my understanding, this can only be done with Tizen/WearOS/FitbitOS, not with other watch operating systems such as Huami's (Amazfit/Xiaomi Mi/etc.). This also means that only a (big) watch, and not a (small) band, is suitable for the above.
Is this correct? If it's wrong, how would I do this otherwise?
How much of a hassle is it to do this in Tizen for a seasoned professional programmer who is a complete noob at Tizen/smartwatches/mobile devices?
Can we develop a headless app that runs constantly?
Are there any tutorials on the subject?
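To make the headless-app question concrete, here is a minimal Wear OS-style sketch in Java of the piece I have in mind: a Service that samples the accelerometer, reduces each reading to a magnitude, and hands it to a store for later sync. Microphone/camera capture and the Bluetooth upload are omitted, and SampleBuffer is a hypothetical placeholder:

```java
// A background service that samples the accelerometer and buffers results.
import android.app.Service;
import android.content.Intent;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.IBinder;

public class SensorLoggerService extends Service implements SensorEventListener {
    private SensorManager sensors;

    @Override
    public void onCreate() {
        super.onCreate();
        sensors = (SensorManager) getSystemService(SENSOR_SERVICE);
        Sensor accel = sensors.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        // SENSOR_DELAY_NORMAL keeps the duty cycle (and battery cost) low.
        sensors.registerListener(this, accel, SensorManager.SENSOR_DELAY_NORMAL);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // "Minimal processing": store the magnitude, not the raw vector.
        float x = event.values[0], y = event.values[1], z = event.values[2];
        double magnitude = Math.sqrt(x * x + y * y + z * z);
        SampleBuffer.append(event.timestamp, magnitude); // hypothetical store
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }

    @Override
    public void onDestroy() {
        sensors.unregisterListener(this);
        super.onDestroy();
    }

    @Override
    public IBinder onBind(Intent intent) { return null; }
}

// Hypothetical on-watch store; a real app might use a file or a small DB.
final class SampleBuffer {
    static void append(long timestampNs, double value) { /* persist */ }
}
```

On modern Android/Wear OS a service like this would also need to run as a foreground service to survive battery optimizations, which is part of why a full watch OS, rather than a fitness band's firmware, seems necessary.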
Hi all!
We just launched a new social media app on iOS and it's actually legit. Feedback welcome!
Since our official launch on the iOS App Store, many of you have already joined us around the campfire to tell stories of your own. It's been great seeing how creative the community is, and the general response has been positive (4.8/5 average App Store rating!). Even so, we're still working hard to improve the platform and bring you the best experience possible.
Here’s what you can expect:
KEY UPDATES
SoundSuite:
SoundSuite lets you add a layer of audio behind your voice to make your captions even more engaging and immersive. Our idea was to help users create experiences within the caption, instead of limiting them to text and emojis. This update expands Campfire's core functionality in a way that truly captures the essence of Campfire, which is traditional storytelling meets modern technology. Everyone has that friend or family member who tells the best stories. They probably use descriptive language and animated gestures to convey the emotion behind the story. Our aim is to give you the option to emulate that energy by adding a layer of background audio that complements the story being told. We're really excited to see how creative you get with this new update.
Recording Limit Increase (20 to 30 seconds)
We heard the feedback from the community - Campfire users have a lot more to say. We want to give you ample time to get your point across, so we are increasing the recording limit to 30 seconds.
Adding Connections (Friends)
We know that sharing is caring, but sharing with friends is even better. With this update, you’ll be able to add people to your network and ensure that your stories appear on their feed.
In-App Search
This feature is pretty much essential to making your Campfire experience as seamless as possible. You'll be able to locate whatever you're looking for.
In-App Photo Capture
You now have more options with the photos you share on Campfire: upload from your camera roll, or take the photo from within the app.
Stay tuned for more updates and feel free to try the app out and let me know what you think here!
Are you going to make a version for Android?