Hi Guys
Is there an official limit on vertex coordinates in OpenGL ES?
I am developing games with OpenGL on Android and found out that, on the emulator and on some devices, each position component only allows values up to the signed-short maximum (-32,768 to 32,767).
If I place a quad outside of this range, it is not rendered, even if I move the camera there as well.
Besides the emulator, I know this issue also affects the HTC Wildfire. I am using libgdx in my games, but I don't think it's a matter of this lib.
Can anyone tell me whether there is a defined hardware limit for this, which devices are affected, and/or how I can check for it?
Hopefully someone knows something about it - thanks in advance!
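The only workaround I can think of so far is to re-base coordinates around the camera so the values that actually reach the GPU stay well inside that range. A rough sketch (not my actual code, names made up):
Code:
// Sketch: shift world-space vertex positions into camera-relative space before
// uploading, so the values the GPU sees stay well inside +/-32,767.
public final class FloatingOrigin {

    /** Returns a copy of interleaved xyz positions with the camera position subtracted. */
    public static float[] rebase(float[] worldXyz, float camX, float camY, float camZ) {
        float[] local = new float[worldXyz.length];
        for (int i = 0; i < worldXyz.length; i += 3) {
            local[i]     = worldXyz[i]     - camX;
            local[i + 1] = worldXyz[i + 1] - camY;
            local[i + 2] = worldXyz[i + 2] - camZ;
        }
        return local;
    }
}
The camera is then treated as sitting at the origin when building the view matrix. Still, I'd prefer to know whether there is an actual documented limit.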
Hi All, this is my first post here, so here I go!
We're developing an app that uses video to texture spheres and render panoramas.
The way we did it until now was to use ffmpeg to do software decoding of the video (our sources are really big; at full scale it's a 5700x2400 image).
The problem is that it's not really feasible, it's way too slow.
So we tried to hardware-decode on the GPU, but the problem is that the decoded data is not memory-accessible: the accelerated methods in the MediaPlayer class are only for playback, so we cannot really map the data onto a textured sphere.
We tried working with API 14 (ICS), and there's a method that's been added to the class that makes things go remarkably well even at high resolution:
Code:
mTexture = new SurfaceTexture(texId);          // texId is a GL_TEXTURE_EXTERNAL_OES texture id
mTexture.setOnFrameAvailableListener(this);    // notifies us when a new decoded frame is ready
Surface surface = new Surface(mTexture);
this.mMediaPlayer.setSurface(surface);         // setSurface(Surface) exists since API 14
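For completeness, the GL side that goes with that looks roughly like the sketch below (class and field names are made up here, and the sphere geometry / samplerExternalOES shader are left out):
Code:
import android.graphics.SurfaceTexture;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

// Sketch of the GL side of the API-14 SurfaceTexture path.
// The fragment shader must sample a samplerExternalOES uniform (OES_EGL_image_external).
public class VideoSphereRenderer implements GLSurfaceView.Renderer,
        SurfaceTexture.OnFrameAvailableListener {

    private SurfaceTexture mTexture;
    private int texId;
    private volatile boolean frameAvailable;

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        texId = tex[0];
        // The decoder writes into an *external* texture, not GL_TEXTURE_2D.
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, texId);
        GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

        mTexture = new SurfaceTexture(texId);
        mTexture.setOnFrameAvailableListener(this);
        // hand new Surface(mTexture) to MediaPlayer.setSurface(...) as in the snippet above
    }

    @Override
    public void onFrameAvailable(SurfaceTexture st) {
        frameAvailable = true;  // may run on any thread, so only set a flag here
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        if (frameAvailable) {
            mTexture.updateTexImage();  // latch the latest decoded frame into texId
            frameAvailable = false;
        }
        // ... bind texId to GL_TEXTURE_EXTERNAL_OES and draw the textured sphere ...
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        GLES20.glViewport(0, 0, width, height);
    }
}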
Our problem is that, the homebrew scene aside, we really have to complete the job on Android 3.2.
Can you think of anything that could help us get around this annoying 3.2 limitation?
Maybe we've overlooked something. Or maybe there's some third-party module that decodes using hardware acceleration that we didn't know of (some obscure ffmpeg fork, maybe?).
We also thought about going through all the source code from 3.2 and 4.0, deriving a class from upstream (where the two versions are similar) and then reworking our code backwards to implement that functionality on 3.2 using 4.0 as a blueprint, but that seems as overkill as it gets.
And we don't have that much time left until the end of the project.
Thanks in advance to everybody that's reading and for the answers!
Max
Recently I have been trying to determine whether the Atrix supports anti-aliasing in 3D applications. It seems that it does not, as I have yet to see it used in various apps. Also, the Chainfire3D driver says "MSAA not supported by your GPU" - so it seems like it does not. However, the initial demo videos of both NFS: Shift and Epic Citadel (from before the Atrix was released) show anti-aliasing, and in Samurai II the graphics do appear to be anti-aliased - so this is why I am curious:
Does the Atrix support anti-aliasing?
Tegra 2 doesn’t support MSAA. Apparently you can only use coverage sampling anti-aliasing (CSAA).
Huh, very interesting. CSAA is NVIDIA's sort-of-optimized version of MSAA, available through proprietary APIs, I assume? I am thinking that there is probably no way to make existing apps use CSAA - but is that true? It would be really cool to make an intermediary driver (something like Chainfire3D) which would let apps utilize CSAA when they would normally be using MSAA on other devices - that would probably involve intercepting API calls and converting them...
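From what I can find, CSAA on Tegra is requested at EGL config selection time via the EGL_NV_coverage_sample extension rather than the standard EGL_SAMPLES attribute. A rough sketch of an EGLConfigChooser asking for it might look like this (names and attribute choices are my guess - you'd want to verify the extension is actually reported by eglQueryString first):
Code:
import android.opengl.GLSurfaceView;
import javax.microedition.khronos.egl.EGL10;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.egl.EGLDisplay;

// Sketch only: requests a CSAA-capable config via the EGL_NV_coverage_sample extension.
public class CsaaConfigChooser implements GLSurfaceView.EGLConfigChooser {
    // Attribute names from the EGL_NV_coverage_sample extension spec.
    private static final int EGL_COVERAGE_BUFFERS_NV = 0x30E0;
    private static final int EGL_COVERAGE_SAMPLES_NV = 0x30E1;

    @Override
    public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display) {
        int[] attribs = {
                EGL10.EGL_RED_SIZE, 5,
                EGL10.EGL_GREEN_SIZE, 6,
                EGL10.EGL_BLUE_SIZE, 5,
                EGL10.EGL_DEPTH_SIZE, 16,
                EGL_COVERAGE_BUFFERS_NV, 1,  // ask for a coverage buffer
                EGL_COVERAGE_SAMPLES_NV, 5,  // Tegra's "5x" CSAA mode
                EGL10.EGL_NONE
        };
        EGLConfig[] configs = new EGLConfig[1];
        int[] numConfigs = new int[1];
        if (!egl.eglChooseConfig(display, attribs, configs, 1, numConfigs)
                || numConfigs[0] == 0) {
            return null; // no CSAA config available; fall back to a plain config
        }
        return configs[0];
    }
}
An app would install it with glSurfaceView.setEGLConfigChooser(new CsaaConfigChooser()) before setRenderer() - which is exactly the kind of call an intermediary driver would have to inject.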
Maybe it would be possible to make a plugin for Chainfire3D that would do that - it would be really cool to have Tegra's built-in 5x CSAA work with any app!
Ideas anyone?
Noticed that this got moved: my apologies mod for the wrong section - thank you!
Does anyone have any thoughts on a cf3d plugin? When ChainFire returns I intend to ask him if it is a conceivable idea or not.
This is sort of a research thread and I hope someone here is willing to weigh in with their knowledge.
I'm a Ruby / Java / Python / JS / PHP developer, who did a little bit of Android game development during my studies back in 2012. I assume things have changed since then.
I'm working on a commercial project where we need a network-controllable video player for LED TVs and/or video projectors. Currently we are using a Raspberry Pi 3-based design with OMX Player, but the board is somewhat weak and the player is cumbersome to interact with and has limitations, especially when it comes to rendering multiple layers with transparency. I would like to move to a platform that gives me a rich, object-oriented multimedia API for rendering sound and video.
I have obtained an Asus Tinker, which has an official Android distribution. It runs rather smoothly, and from what I can tell the Android APIs appear rich and flexible. So my questions are:
1) Is it possible to develop a launcher/kiosk app that will let me boot into a "blank" screen and allow the app to place video surfaces, image surfaces and text layers? (See the sketch after this list for the kind of full-screen setup I mean.) I should also be able to interact with the sound card and play back PCM audio, and I would like an API that supports audio mixing, amplification, etc. There is no direct user input on the device, so I need a solution that does not present any status bars, Google account wizards, Wi-Fi wizards, update prompts, notifications or anything else. In fact, when the Tinker is powered on, there should ideally not be anything indicating that it's Android.
I guess what I'm asking for is kind of a console video game engine / SDK, minus game controller support.
2) What kind of libraries or APIs would I need to dive into and understand? Where should I start?
3) How complex is it? What is the scope of it? How much development time? Days? Weeks? Months? Years? Would I need more developers with specific skills?
4) Is there any developer here who's interested in participating in such a project as a paid freelance developer?
5) Are there any alternative software/OS platforms I should look into? I want to be able to boot into a custom passive user interface that is remotely controlled over REST by another device. I would like to avoid dealing with low-level implementation of video decoding and rendering, but at the same time I would prefer to have control over screen resolution, refresh rate and color depth, and I would like to run an SSH server on the client so it can be serviced. Ideally, the platform should be able to stream from the internet, but also accept commands to download to local storage and play from there.
6) Is there any alternative hardware platform I should look into?
7) Anything else I should consider? Problems that I'll need to address / prepare for?
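To make question 1 concrete, here is a rough sketch (names and the URL are invented, nothing production-ready) of the kind of full-screen, chrome-free activity I have in mind - it hides the system bars and just plays a video:
Code:
import android.app.Activity;
import android.net.Uri;
import android.os.Bundle;
import android.view.View;
import android.widget.VideoView;

// Rough sketch of the "blank screen" idea: one full-screen activity with no
// system chrome that only renders a video surface.
public class KioskPlayerActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        VideoView videoView = new VideoView(this);
        setContentView(videoView);

        // Hide status and navigation bars; "immersive sticky" keeps them hidden.
        getWindow().getDecorView().setSystemUiVisibility(
                View.SYSTEM_UI_FLAG_IMMERSIVE_STICKY
                        | View.SYSTEM_UI_FLAG_FULLSCREEN
                        | View.SYSTEM_UI_FLAG_HIDE_NAVIGATION
                        | View.SYSTEM_UI_FLAG_LAYOUT_FULLSCREEN
                        | View.SYSTEM_UI_FLAG_LAYOUT_HIDE_NAVIGATION);

        // In the real app the source would come from the REST control channel.
        videoView.setVideoURI(Uri.parse("https://example.com/demo.mp4"));
        videoView.setOnPreparedListener(mp -> {
            mp.setLooping(true);
            videoView.start();
        });
    }
}
To make this the boot experience, I assume the activity would additionally be declared as a HOME/launcher activity in the manifest (and possibly pinned via lock task / device owner mode), but I haven't verified how well that suppresses every system prompt.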
Hi,
I'm trying to get the feed of my 4K webcam into the camera of an Android emulator, but I'm having trouble getting the full resolution. I have tried many emulators, but they all seem to have some specific max camera resolution set. On Nox and Genymotion I get a max camera resolution of 640x480, on KoPlayer I got 1280x720, in the Android Virtual Device Manager (AVD) I got 1280x960, and in MEmu, which was the best one so far, I got 1920x1080. However, that resolution doesn't suffice for my purposes. Does anyone know an emulator which supports higher virtual camera resolutions? Or how to create a virtual device in AVD which supports higher resolutions?
I know there are hw.camera.maxHorizontalPixels and hw.camera.maxVerticalPixels properties for AVDs, but I don't know where to edit them.
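My best guess so far is that these are AVD hardware properties that belong in the AVD's config.ini (on a default setup something like ~/.android/avd/<avd-name>.avd/config.ini) and take effect after a cold boot, e.g.:
Code:
# added to <avd-name>.avd/config.ini (values are just an example for 4K)
hw.camera.back=webcam0
hw.camera.maxHorizontalPixels=3840
hw.camera.maxVerticalPixels=2160
But I haven't been able to confirm where this is documented, or whether the emulated webcam pipeline will actually deliver frames that large.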
Any help would be greatly appreciated
Hi. I am trying to run a game console through the AV input of my HCT-PX30 (PX6, which is a Rockchip RK3326) car head unit, but it has high latency (i.e. when I press a button on the game controller the response on screen is delayed, making any game unplayable). I suspect this is due to the scaling that the PX30 MCU is doing, so I had the idea of somehow injecting the raw signal from the game console directly into the LCD screen.
My question is: does anyone know what signalling the TFT LCD touchscreen is using? For example, is it MIPI DSI on the ribbon connector? If it is, I could use a generic controller to forward the signals to the LCD, bypassing the PX30 running Android Pie.
Otherwise, another idea is if anyone knows how to reduce the latency (maybe there is a hidden developer option to disable scaling or to adjust the latency somehow). One trick might be to access 'video0' and use a different method of streaming the video (such as VLC with lower buffering), since the embedded 'AV' APK that ships with the head unit might be poorly coded and designed more for watching video from, say, a DVD player, which doesn't need low latency.
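If anyone has tried the 'video0' route: my rough plan would be to check whether the AV input is exposed as an extra Camera device and preview it straight to a SurfaceView, skipping the stock AV app entirely. A sketch (the camera index is a guess you'd have to probe):
Code:
import android.hardware.Camera;
import android.view.SurfaceHolder;

// Experiment only: on some head units the AV input shows up as an extra Camera
// device. Try each index and preview it straight to a SurfaceView's holder.
public final class AvInputProbe {

    public static Camera openFirstWorking(SurfaceHolder holder) {
        int count = Camera.getNumberOfCameras();
        for (int i = 0; i < count; i++) {
            Camera cam = null;
            try {
                cam = Camera.open(i);          // index of the AV-in device is unknown
                cam.setPreviewDisplay(holder); // draw frames directly, no app-level buffering
                cam.startPreview();
                return cam;                    // caller must stopPreview()/release() later
            } catch (Exception e) {
                if (cam != null) cam.release(); // not usable (busy, not AV-in, ...)
            }
        }
        return null; // AV input not exposed through the Camera API on this unit
    }
}
If the AV input isn't registered as a Camera device, this won't find it, and reading /dev/video0 directly would presumably need root and a V4L2-capable player.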