Choppy audio when decoding audio/video with Xuggler

November 28, 2020 · 14 views · 0 comments

So, I'm writing an audio decoder to go with an existing video decoder for use in libGDX. The problem is that when the audio code is not threaded, both audio and video stutter: the audio plays a chunk, then the video plays a chunk.

My solution was to add some multithreading while leaving the video decoding where it was (the libGDX render thread is not thread-safe, and touching it from another thread causes bad things to happen without failing outright). The natural choice was then to hand the audio work to a thread executor.

This fixed the video stutter, but not only does the audio still stutter, it now has artifacts all over it.

This is my first attempt at serious audio programming, so keep in mind that I may be missing something basic. The executor service is a SingleThreadExecutor, the idea being that the audio needs to be decoded and written out in order.
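As a side note, the ordering guarantee being relied on here is that a single-thread executor runs submitted tasks one at a time, in submission order. A minimal, self-contained sketch of that property (the class and variable names below are illustrative, not from the decoder):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class OrderedDecodeDemo {
    public static void main(String[] args) throws InterruptedException {
        // A single-thread executor executes tasks sequentially,
        // in the order they were submitted - the property the
        // audio path depends on for in-order decode and playback.
        ExecutorService decodePool = Executors.newSingleThreadExecutor();
        List<Integer> decoded = new CopyOnWriteArrayList<>();

        for (int i = 0; i < 5; i++) {
            final int packetIndex = i;
            decodePool.execute(() -> decoded.add(packetIndex));
        }

        decodePool.shutdown();
        decodePool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(decoded); // prints [0, 1, 2, 3, 4]
    }
}
```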

Here is the update method:

public boolean update(float dtSeconds) {
    if(playState != PlayState.PLAYING) return false;

    long dtMilliseconds = (long)(dtSeconds * 1000);
    playTimeMilliseconds += dtMilliseconds;

    sleepTimeoutMilliseconds = (long) Math.max(0, sleepTimeoutMilliseconds - dtMilliseconds);
    if(sleepTimeoutMilliseconds > 0) {
        // The playhead is still ahead of the current frame - do nothing
        return false;
    }


    while(true) {
        int packet_read_result = container.readNextPacket(packet);

        if(packet_read_result < 0) {
            // Got bad packet - we've reached end of the video stream
            stop();
            return true;
        }


        if(packet.getStreamIndex() == videoStreamId) 
        {
            // We have a valid packet from our stream

            // Allocate a new picture to get the data out of Xuggler
            IVideoPicture picture = IVideoPicture.make(
                videoCoder.getPixelType(),
                videoCoder.getWidth(),
                videoCoder.getHeight()
            );

            // Attempt to read the entire packet
            int offset = 0;
            while(offset < packet.getSize()) {
                // Decode the video, checking for any errors
                int bytesDecoded = videoCoder.decodeVideo(picture, packet, offset);
                if (bytesDecoded < 0) {
                    throw new RuntimeException("Got error decoding video");
                }
                offset += bytesDecoded;

                /* Some decoders will consume data in a packet, but will not
                 * be able to construct a full video picture yet. Therefore
                 * you should always check if you got a complete picture
                 * from the decoder
                 */
                if (picture.isComplete()) {
                    // We've read the entire packet
                    IVideoPicture newPic = picture;

                    // Timestamps are stored in microseconds - convert to milli
                    long absoluteFrameTimestampMilliseconds = picture.getTimeStamp() / 1000;
                    long relativeFrameTimestampMilliseconds = (absoluteFrameTimestampMilliseconds - firstTimestampMilliseconds);
                    long frameTimeDelta = relativeFrameTimestampMilliseconds - playTimeMilliseconds;

                    if(frameTimeDelta > 0) {
                        // The video is ahead of the playhead, don't read any more frames until it catches up
                        sleepTimeoutMilliseconds = frameTimeDelta + sleepTolleranceMilliseconds;
                        return false;
                    }

                    /* If the resampler is not null, that means we didn't get the video in
                     * BGR24 format and need to convert it into BGR24 format
                     */
                    if (resampler != null) {
                        // Resample the frame
                        newPic = IVideoPicture.make(
                            resampler.getOutputPixelFormat(),
                            picture.getWidth(), picture.getHeight()
                        );

                        if (resampler.resample(newPic, picture) < 0) {
                            throw new RuntimeException("Could not resample video");
                        }
                    }

                    if (newPic.getPixelType() != IPixelFormat.Type.BGR24) {
                        throw new RuntimeException("Could not decode video as BGR 24 bit data");
                    }

                    // And finally, convert the BGR24 to an Java buffered image
                    BufferedImage javaImage = Utils.videoPictureToImage(newPic);

                    // Update the current texture
                    updateTexture(javaImage);

                    // Let the caller know the texture has changed
                    return true;
                }
            }
        }
        else if(packet.getStreamIndex() == this.audioStreamId)
        {
            IAudioSamples samples = IAudioSamples.make(1024, audioCoder.getChannels());
            Thread thread = new Thread(new DecodeSoundRunnable(samples));
            thread.setPriority(Thread.MAX_PRIORITY);
            this.decodeThreadPool.execute(thread);

        }

    }
}

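The frame-scheduling arithmetic in update() can be isolated into a pure function: Xuggler timestamps are in microseconds, the playhead in milliseconds, and a positive delta means the frame is ahead of the playhead. A sketch under those assumptions (the tolerance value is illustrative):

```java
public class FrameTiming {
    // Mirrors the timing logic in update(): convert the frame's
    // microsecond timestamp to milliseconds, make it relative to the
    // first frame, and compare against the playhead.
    static long computeSleepMillis(long frameTimestampMicros,
                                   long firstTimestampMillis,
                                   long playTimeMillis,
                                   long toleranceMillis) {
        long absoluteMillis = frameTimestampMicros / 1000;
        long relativeMillis = absoluteMillis - firstTimestampMillis;
        long delta = relativeMillis - playTimeMillis;
        // Positive delta: the frame is ahead of the playhead, so wait.
        return delta > 0 ? delta + toleranceMillis : 0;
    }

    public static void main(String[] args) {
        // Frame stamped at 2,500,000 us; stream started at 0 ms;
        // playhead at 2,400 ms; 5 ms tolerance -> wait 105 ms.
        System.out.println(computeSleepMillis(2_500_000, 0, 2_400, 5));
    }
}
```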
Here is the audio thread:

private class DecodeSoundRunnable implements Runnable
    {
        IAudioSamples samples;
        int offset = 0;
        IStreamCoder coder;

        public DecodeSoundRunnable(IAudioSamples samples)
        {
            this.samples = samples.copyReference();
            this.coder = audioCoder.copyReference();
        }

        @Override
        public void run() {
            while(offset < packet.getSize())
            {
                 int bytesDecoded = this.coder.decodeAudio(samples, packet, offset);
                 if (bytesDecoded < 0)
                    break;//throw new RuntimeException("got error decoding audio in: " + videoPath);

                 offset += bytesDecoded;                  
            }
            playJavaSound(samples, 0);
            //writeOutThreadPool.execute(new WriteOutSoundRunnable(samples, 0));

        }
    }

The solution was as follows:

I fixed this by creating a dedicated thread that does nothing but write out the audio data. This works because mLine.write(byte[] bytes) blocks while the data is being written.

private class WriteOutSoundBytes implements Runnable
    {
        byte[] rawByte;
        public WriteOutSoundBytes(byte[] rawBytes)
        {
            rawByte = rawBytes;
        }
        @Override
        public void run() 
        {
            mLine.write(rawByte, 0, rawByte.length);
        }


    }
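The effect of pushing the blocking writes onto a single-thread executor can be sketched with a ByteArrayOutputStream standing in for the blocking SourceDataLine (mLine), so the serialized ordering is observable without audio hardware (all names below are illustrative):

```java
import java.io.ByteArrayOutputStream;
import java.util.Arrays;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class WriteOutDemo {
    public static void main(String[] args) throws InterruptedException {
        // A single-thread executor performs the writes one at a time,
        // in order, just as they would land on a blocking
        // SourceDataLine.write() in the real player.
        ExecutorService writeOutPool = Executors.newSingleThreadExecutor();
        ByteArrayOutputStream line = new ByteArrayOutputStream(); // stand-in for mLine

        for (byte b = 1; b <= 4; b++) {
            final byte[] chunk = {b};
            writeOutPool.execute(() -> line.write(chunk, 0, chunk.length));
        }

        writeOutPool.shutdown();
        writeOutPool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(Arrays.toString(line.toByteArray())); // prints [1, 2, 3, 4]
    }
}
```

Because the executor has exactly one worker, each chunk's blocking write finishes before the next begins, which is what keeps the audio contiguous.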